
CN113326802A - 3D recognition device, 3D recognition system and recognition method thereof - Google Patents


Info

Publication number
CN113326802A
CN113326802A (application CN202110699174.1A)
Authority
CN
China
Prior art keywords
image
measured object
receiving module
light
measured
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110699174.1A
Other languages
Chinese (zh)
Inventor
王百顺
余建男
徐小岚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Bosheng Photoelectric Technology Co ltd
Original Assignee
Shenzhen Bosheng Photoelectric Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Bosheng Photoelectric Technology Co ltd filed Critical Shenzhen Bosheng Photoelectric Technology Co ltd
Priority to CN202110699174.1A priority Critical patent/CN113326802A/en
Publication of CN113326802A publication Critical patent/CN113326802A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/38 Payment protocols; Details thereof
    • G06Q20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401 Transaction verification
    • G06Q20/4014 Identity check for transactions
    • G06Q20/40145 Biometric identity checks

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present application discloses a 3D recognition device, a 3D recognition system and a recognition method thereof. The 3D recognition device includes: a light emitting module configured to emit light toward the surface of a measured object; a first receiving module configured to receive at least part of the light reflected from the surface of the measured object; and a second receiving module configured to receive at least part of the light reflected from the surface of the measured object. A first polarizer is arranged on the side of the first receiving module close to the measured object, and/or a second polarizer is arranged on the side of the second receiving module close to the measured object. Through different combinations of the polarizers mounted in front of the receiving modules, in some scenarios the better of the two received images can be selected for output according to their different brightness, or the two images can be combined at the back end before output, extending the 3D recognition system to more usage scenarios.

Description

3D recognition device, 3D recognition system and recognition method thereof
Technical Field
The present application generally relates to the field of 3D technologies, and in particular, to a 3D recognition apparatus, a 3D recognition system, and a recognition method thereof.
Background
The 3D imaging technology overcomes the defect that the traditional 2D imaging technology can only obtain objective world two-dimensional information, realizes accurate acquisition of depth information (distance from an object to a camera), and has wide application prospects in the fields of man-machine interaction, three-dimensional reconstruction, security monitoring, machine vision and the like.
With the emergence of scenarios with extremely high safety requirements, such as face recognition access control and face payment, living-body detection and authenticity detection have become urgent needs. This has given rise to 3D (three-dimensional) imaging technology, which has become widely known and is increasingly applied in industries such as consumer payment and recognition.
In practice, however, 3D stereoscopic imaging still has significant limitations: in relatively special scenes, such as an object overexposed by direct outdoor sunlight, the depth information cannot be calculated at all and the 3D information of the object cannot be output.
Disclosure of Invention
In view of the above-mentioned drawbacks or shortcomings in the prior art, it is desirable to provide a 3D recognition apparatus, a 3D recognition system and a recognition method thereof, which can improve a 3D display effect.
In a first aspect, the present application provides a 3D recognition apparatus, comprising:
the light emitting module is configured for emitting light to the surface of the object to be measured;
the first receiving module is configured for receiving the light reflected by at least part of the surface of the object to be measured;
the second receiving module is configured to receive at least part of the light reflected from the surface of the object to be measured;
wherein a first polarizer is arranged on the side of the first receiving module close to the object to be measured, and/or a second polarizer is arranged on the side of the second receiving module close to the object to be measured.
Optionally, the polarization angles of the first polarizer and the second polarizer are the same.
Optionally, the first polarizer and the second polarizer have different polarization angles.
Optionally, the polarization angle of the first polarizer is 0-180°, and the polarization angle of the second polarizer is 0-180°.
Optionally, the first receiving module and the second receiving module are located on the same side of the light emitting module or the first receiving module and the second receiving module are located on different sides of the light emitting module.
Optionally, the light emitting module includes a light source and a diffractive optical element, and the light source includes one or more of a high-contrast vertical cavity surface emitting laser, a single-aperture wide-area vertical cavity surface emitting laser, an array vertical cavity surface emitting laser, a laser diode, and an LED light source.
In a second aspect, the present application provides a 3D identification system comprising a 3D identification device as described in any of the above, wherein the 3D identification system is a 3D structured light identification system,
the light emitting module is configured for emitting the coded structured light to the surface of the measured object;
the first receiving module is configured to receive the structured light reflected by at least part of the surface of the object to be measured so as to acquire a first image related to the object to be measured;
the second receiving module is configured to receive the structured light reflected by at least part of the surface of the object to be measured so as to acquire a second image related to the object to be measured;
and the data processing unit is configured to acquire the first image and the second image and establish 3D depth information of the measured object based on the first image and the second image.
In a third aspect, the present application provides a 3D identification system comprising a 3D identification apparatus as described in any of the above, wherein the 3D identification system is a 3D TOF identification system,
the light emitting module is configured to emit TOF light to the surface of the measured object;
the first receiving module is configured to receive TOF light reflected by at least part of the surface of the object to be measured to acquire a first image related to the object to be measured;
the second receiving module is configured to receive the TOF light reflected by at least part of the surface of the object to be measured to acquire a second image related to the object to be measured;
and the data processing unit is configured to acquire the first image and the second image and establish 3D depth information of the measured object based on the first image and the second image.
In a fourth aspect, the present application provides a 3D identification system, the 3D identification system is a binocular vision identification system, comprising an active binocular vision unit and a passive binocular vision unit, wherein the active binocular vision unit comprises the 3D identification device as described in any one of the above, a third receiving module and a fourth receiving module are provided in the passive binocular vision unit, the third receiving module is provided with a third polarizer on a side close to the object to be tested, and/or the fourth receiving module is provided with a fourth polarizer on a side close to the object to be tested, wherein,
the light emitting module is configured to emit the coded structured light or TOF light to the surface of the measured object;
the first receiving module is configured to receive the structured light or the TOF light reflected by at least part of the surface of the object to be measured to acquire a first image related to the object to be measured;
the second receiving module is configured to receive the structured light or the TOF light reflected by at least part of the surface of the object to be measured to acquire a second image related to the object to be measured;
the third receiving module is configured to receive the structured light or the TOF light reflected by at least part of the surface of the object to be measured to acquire a third image related to the object to be measured;
the fourth receiving module is configured to receive the structured light or the TOF light reflected by at least part of the surface of the object to be measured to acquire a fourth image related to the object to be measured;
the data processing unit is configured to acquire the first image and the second image, and establish active binocular 3D depth information of the object to be measured based on the first image and the second image; it is also configured to acquire the third image and the fourth image and establish passive binocular 3D depth information of the object to be measured based on the third image and the fourth image.
In a fifth aspect, the present application provides a 3D identification method applied to the 3D identification system as described in any one of the above, including:
emitting light to the surface of the measured object through the light emitting module;
acquiring a first image related to the measured object through a first receiving module;
acquiring a second image related to the measured object through a second receiving module;
comparing the first image with the second image through a data processing unit, and selecting one of the images as a standard image, or processing the first image and the second image through the data processing unit to obtain the standard image;
and carrying out post-processing on the standard image to obtain the 3D depth information of the measured object.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
the 3D recognition device provided by the embodiments of the present application includes one light emitting module and two receiving modules. Through different combinations of the polarizers mounted in front of the receiving modules, in some scenes the better image can be selected for output according to the different brightness of the received images, or the two images can be combined at the back end before output, extending the 3D recognition system to more usage scenarios.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 is a schematic structural diagram of a first 3D identification device according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a second 3D identification device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a third 3D identification device provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a fourth 3D identification device provided in an embodiment of the present application;
fig. 5 is a flowchart of a 3D recognition method provided by an embodiment of the present application;
fig. 6 is a schematic diagram of 3D structured light recognition provided by an embodiment of the present application;
FIG. 7 is an image of ambient light overexposure in 3D structured light recognition;
FIG. 8 is an image after 3D structured light recognition provided by embodiments of the present application;
fig. 9 is a flowchart of another 3D identification method provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of one principle of 3D TOF identification provided by an embodiment of the present application;
FIG. 11 is another schematic diagram of 3D TOF identification provided by an embodiment of the present application;
FIG. 12 is an image of ambient light overexposure in 3D TOF identification;
fig. 13 is an image after 3D TOF identification provided by an embodiment of the application;
fig. 14 is a schematic diagram of the structure of an active binocular vision unit provided by an embodiment of the present application;
fig. 15 is a schematic diagram of a passive binocular vision unit provided by an embodiment of the present application;
fig. 16 is a schematic diagram of binocular stereo vision provided by an embodiment of the present application.
10. A light emitting module; 20. a first receiving module; 201. a first polarizing plate; 30. a second receiving module; 301. a second polarizing plate; 40. a color module; 50. a third receiving module; 501. a third polarizing plate; 60. a fourth receiving module; 100. an active binocular vision unit; 200. a passive binocular vision unit.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to fig. 1-4 in detail, in a first aspect, the present application provides a 3D identification apparatus, comprising:
the light emitting module 10 is configured to emit light to the surface of the object to be measured;
the first receiving module 20 is configured to receive the light reflected by at least part of the surface of the object to be measured;
a second receiving module 30 configured to receive at least part of the light reflected from the surface of the object to be measured;
the first receiving module 20 is provided with a first polarizer 201 on a side close to the object to be measured, and/or the second receiving module 30 is provided with a second polarizer 301 on a side close to the object to be measured.
In addition, the 3D identification system provided by the embodiment of the present application further includes a color module 40 configured to obtain a color image of the surface of the measured object. After being matched with the depth image, the color image and texture information of the object can be rendered by the data processing unit and output as a three-dimensional image of the scene.
In some embodiments, the polarization angles of the first polarizer 201 and the second polarizer 301 are the same, the polarization angle of the first polarizer 201 is 0 to 180 °, and the polarization angle of the second polarizer 301 is 0 to 180 °.
In some embodiments, the polarization angles of the first polarizer 201 and the second polarizer 301 are different, the polarization angle of the first polarizer 201 is 0 to 180 °, and the polarization angle of the second polarizer 301 is 0 to 180 °.
Various arrangements of the polarizers can be applied: 1) one module is provided with a polarizer and the other is not; 2) both modules are provided with polarizers whose polarization angles are the same, as shown in FIG. 3, where the polarization angles of the first polarizer 201 and the second polarizer 301 are consistent; 3) both modules are provided with polarizers whose polarization angles differ, as shown in FIG. 4, where the polarization angle of the first polarizer 201 is 0° and that of the second polarizer 301 is 90°; 4) both modules are provided with polarizers whose polarization angles differ and may be arbitrary values between 0 and 180°.
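The attenuation each arrangement produces can be estimated with Malus's law, a general optics relation rather than anything specified in this application: an ideal polarizer transmits I = I₀·cos²θ of linearly polarized light whose polarization direction is at angle θ to the polarizer axis, and roughly half of unpolarized ambient light. A minimal sketch:

```python
import math

def transmitted_intensity(i0: float, angle_deg: float) -> float:
    """Malus's law: intensity of linearly polarized light of intensity i0
    after passing an ideal polarizer whose axis is angle_deg away from
    the light's polarization direction."""
    return i0 * math.cos(math.radians(angle_deg)) ** 2

# Unpolarized ambient light is cut to ~50% by a single polarizer, while
# crossed polarizers (axes 90 degrees apart) block the remainder entirely.
```

This is why the two receiving modules can see markedly different brightness for the same scene: light reflected from the object surface retains some polarization, so polarizers at different angles pass different fractions of it.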
In addition, in a specific setting, the first receiving module 20 and the second receiving module 30 are located on the same side of the light emitting module 10 or the first receiving module 20 and the second receiving module 30 are located on different sides of the light emitting module 10. In the embodiment of the present application, the positions of the first receiving module 20 and the second receiving module 30 are not limited, and different identification devices may be applied according to different application scenarios.
The light emitting module 10 includes a light source and a diffractive optical element; the light source includes, but is not limited to, one or more of a high-contrast vertical cavity surface emitting laser (VCSEL), a single-aperture broad-area VCSEL, an array VCSEL, a laser diode, and an LED. The light emitting module 10 may further include optical elements with other functions, such as a microlens array or a grating, and in a specific configuration the light source may be adjusted appropriately for different application scenarios.
The light source (not shown) emits a light beam, and the diffractive optical element (DOE, not shown) is arranged on the light emitting side of the light source to receive the beam and project the light onto the surface of the measured object. The diffractive optical element produces multiple optical outputs from the single input beam.
The light beam may be infrared, and the light source may be a VCSEL (Vertical Cavity Surface Emitting Laser); through the diffractive optical element, the light source emits infrared structured light and/or floodlight as the object identification signal.
The light beam emitted by the light source may be a pattern light beam, and the pattern light beam is, for example, in a pattern of an irregular lattice, a grid, a stripe, or an encoded pattern. The diffractive optical element may split and expand the light beam emitted from the light source into a plurality of light beams as the object recognition signal. The diffractive optical element can also expand the light beam emitted by the light source into a plurality of infrared floodlights substantially covering the space to be illuminated, with a substantially uniform light intensity distribution in each region of the space to be illuminated.
It should be noted that, in the embodiment of the present application, the object identification signal is, for example, but not limited to, a pattern beam and an image. The pattern light beam is in a pattern of, for example, an irregular lattice, a grid, a stripe, or an encoded pattern. The 3D recognition device in the embodiment of the application can be used in various 3D technologies, and the 3D technologies in the industry at present mainly comprise structured light, TOF (time of flight), binocular and the like.
Example one
The application provides a 3D identification system, wherein, 3D identification system is 3D structure light identification system, as shown in fig. 1, includes:
the light emitting module 10 is configured to emit the coded structured light to the surface of the measured object;
the first receiving module 20 is configured to receive the structured light reflected by at least part of the surface of the object to be measured to obtain a first image related to the object to be measured;
the second receiving module 30 is configured to receive the structured light reflected by at least part of the surface of the object to be measured to obtain a second image related to the object to be measured;
and the data processing unit is configured to acquire the first image and the second image and establish 3D depth information of the measured object based on the first image and the second image.
When 3D structured light photographs an object, glare or similar conditions caused by special circumstances can make parts of the image unreadable. In such cases the polarizers of the two receiving modules produce different effects: the image in which the light identification points can still be distinguished, i.e. the one from the receiving module with the better imaging effect, is selected for back-end algorithm calculation to output better depth information and a better depth image.
As shown in fig. 5, the present application provides a 3D identification method applied to the 3D identification system as described above, including:
ST11, emitting light to the surface of the measured object through the light emitting module 10;
ST12, acquiring a first image related to the object to be measured through the first receiving module 20;
ST13, acquiring a second image related to the object to be measured through the second receiving module 30;
ST14, comparing the first image and the second image through a data processing unit, and selecting one of the images as a standard image, or processing the first image and the second image through the data processing unit to obtain the standard image;
ST15, carrying out post-processing on the standard image to obtain the 3D depth information of the measured object.
In step ST11, the light emitted by the light emitting module 10 to the surface of the object to be measured is a pattern beam, which is, for example, in the form of an irregular lattice, a grid, a stripe, or a code pattern. In the embodiments of the present application, an irregular lattice is taken as an exemplary illustration.
In step ST14, whichever of the first image and the second image has the better image quality may be selected as the standard image for post-processing; alternatively, the first image and the second image may be superposed and synthesized into a new image, which is used as the standard image for post-processing.
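The selection-or-synthesis step can be sketched as follows. The exposure criterion below (distance of the mean gray level from mid-range) and the equal-weight average are illustrative assumptions; the application does not fix a particular metric or blending rule:

```python
import numpy as np

def choose_standard_image(img1: np.ndarray, img2: np.ndarray,
                          blend: bool = False) -> np.ndarray:
    """Pick the less over/under-exposed 8-bit frame, or blend both.

    Exposure score: distance of the mean gray level from mid-range
    (128 for uint8 images); lower is better. Illustrative only.
    """
    if blend:
        # Simple superposition: average the two frames in a wider dtype
        # to avoid uint8 overflow, then convert back.
        return ((img1.astype(np.uint16) + img2.astype(np.uint16)) // 2
                ).astype(np.uint8)
    score1 = abs(float(img1.mean()) - 128.0)
    score2 = abs(float(img2.mean()) - 128.0)
    return img1 if score1 <= score2 else img2
```

In the overexposure scenario of this application, the frame behind the polarizer would typically score better and be chosen as the standard image.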
It should be noted that, in the embodiment of the present application, the data processing unit processes the first image and the second image, including but not limited to image transformation, image enhancement and restoration, image segmentation, image coding, image description, and image recognition.
For the image processing domain:
  • Image transformation: indirect processing techniques such as the Fourier transform, Walsh transform and discrete cosine transform convert processing in the spatial domain into processing in the transform domain, which reduces the amount of computation and allows more efficient processing.
  • Image coding: compression techniques reduce the amount of data (i.e., the number of bits) describing an image, saving image transmission and processing time and reducing the amount of memory occupied.
  • Image enhancement and restoration: the purpose is to improve image quality, for example by removing noise or improving sharpness.
  • Image segmentation: one of the key techniques in digital image processing. Segmentation extracts the meaningful feature parts of an image, such as edges and regions, which form the basis for further image recognition, analysis and understanding.
  • Image description: a necessary prerequisite for image recognition and understanding. For the simplest case, a binary image, geometric characteristics can describe the object; general image description uses two-dimensional shape description, of which there are two kinds of methods, boundary description and region description.
  • Image recognition: belongs to the field of pattern recognition. After certain preprocessing (enhancement, restoration, compression), the image is segmented and features are extracted, on which judgment and classification are performed.
A set of beams projected in known spatial directions is called structured light. The principle of 3D structured light, shown in fig. 6, is as follows: invisible infrared laser light of a specific wavelength serves as the light source; the emitted light passes through an optical diffraction element (DOE) to form an image with a certain encoding rule, which is projected onto the object. The receiving module captures the encoded image on the object surface, and the distortion of the returned coded pattern is then calculated on the basis of optical triangulation to obtain the position and depth information of the object. That is, a reference image containing specific information is burned into the 3D structured light module for a reference surface at a specific distance; the image actually captured on the object surface is compared with this predetermined reference image, their differences are determined, and after algorithmic calculation the result is converted into a depth map for display.
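The triangulation step above reduces to the standard disparity-to-depth relation: a code dot shifted by d pixels between the reference pattern and the captured pattern lies at depth Z = f·b/d, where f is the focal length in pixels and b the emitter-receiver baseline. A sketch (the numeric values in the docstring and test are illustrative, not taken from this application):

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Triangulation: depth Z = f * b / d.

    focal_px     -- focal length of the receiving module, in pixels
    baseline_m   -- emitter-receiver baseline, in meters
    disparity_px -- shift of a code dot relative to the pre-recorded
                    reference pattern, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Because depth is inversely proportional to disparity, a dot that cannot be located (e.g. washed out by overexposure) leaves a hole in the depth map, which is exactly the failure mode the polarizer arrangement is meant to mitigate.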
It should be noted that, in the embodiment of the present application, the standard image is post-processed by using other processing methods in the prior art to obtain the image depth map, which is not described herein again, and any processing method belongs to the protection scope of the present application without departing from the inventive concept of the present application.
Usage scenarios of payment equipment in the consumer payment industry are gradually expanding from indoor to outdoor. Under outdoor high-illumination conditions, however, the structured-light coded image on the object surface is often submerged in the external ambient light, so the depth information of the object cannot be effectively identified and acquired, the depth image of the object is lost, and payment fails.
In a normal environment, pairing the light emitting module 10 with an ordinary receiving module (for example, one without a polarizer) generates a depth map in real time without problems. If ambient-light overexposure as shown in fig. 7 occurs, pairing the emitting module with a receiving module carrying a polarizer reduces the overall brightness so that the spot points can be distinguished from the ambient light, as shown in fig. 8. This ensures that the depth map is output normally and improves the depth map under outdoor high-illumination conditions.
Example two
The application provides a 3D identification system, 3D identification system is 3D TOF identification system, as shown in FIG. 2, includes:
a light emitting module 10 configured to emit TOF light toward the surface of the object to be measured;
a first receiving module 20 configured to receive TOF light reflected by at least a portion of the surface of the object to be measured to obtain a first image related to the object to be measured;
a second receiving module 30 configured to receive TOF light reflected by at least a portion of the surface of the object to be measured to obtain a second image related to the object to be measured;
and the data processing unit is configured to acquire the first image and the second image and establish 3D depth information of the measured object based on the first image and the second image.
When 3D TOF photographs an object, glare or similar conditions caused by special circumstances can make parts of the image unreadable. In such cases the polarizers of the two receiving modules produce different effects: the image in which the object can still be distinguished, i.e. the one from the receiving module with the better imaging effect, is selected for back-end algorithm calculation to output better depth information and a better depth image.
As shown in fig. 9, the present application provides a 3D identification method applied to the 3D identification system as described in any one of the above, including:
ST21, emitting TOF light to the surface of the measured object through the light emitting module 10;
ST22, acquiring a first image related to the object to be measured through the first receiving module 20;
ST23, acquiring a second image related to the object to be measured through the second receiving module 30;
ST24, comparing the first image and the second image through a data processing unit, and selecting one of the images as a standard image, or processing the first image and the second image through the data processing unit to obtain the standard image;
ST25, carrying out post-processing on the standard image to obtain the 3D depth information of the measured object.
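The selection in step ST24 could be sketched as follows; the saturated-pixel criterion and the 8-bit threshold are illustrative assumptions, since the application does not specify its comparison rule:

```python
def overexposure_ratio(image, saturation=255):
    """Fraction of pixels at or above the saturation level (8-bit assumed)."""
    pixels = [p for row in image for p in row]
    return sum(p >= saturation for p in pixels) / len(pixels)

def select_standard_image(first_image, second_image, saturation=255):
    """Pick the less overexposed of the two received images as the
    standard image handed to post-processing (a sketch of ST24)."""
    if overexposure_ratio(first_image, saturation) <= overexposure_ratio(second_image, saturation):
        return first_image
    return second_image

# A glare-hit image (many saturated pixels) loses to a clean one.
glare = [[255, 255], [255, 40]]
clean = [[120, 130], [110, 90]]
```

A real implementation would likely score spot contrast rather than raw saturation, but the selection structure is the same.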
TOF stands for time of flight. The working principle of a TOF imaging device is that the light emitting module 10 continuously emits light pulses toward the measured object, the TOF camera receives the light signals returned from the object, and the control system of the TOF imaging device calculates the flight (round-trip) time between emission and reception to obtain the distance of the measured object. Unlike ultrasonic ranging, which places high demands on the reflecting object and essentially cannot measure objects with small areas such as lines and conical objects, a TOF imaging device places low demands on the size and area of the measured object, offers high ranging precision and long range, and responds quickly.
TOF can be classified into:
1) Pulse modulation (DTOF): the distance is calculated directly from the time difference between pulse emission and pulse reception, as shown in fig. 10.
2) Continuous wave modulation (ITOF): sinusoidal modulation is commonly employed. Since the phase shift between the sinusoidal waves at the receiving end and the transmitting end is proportional to the distance from the object to the camera, the distance can be measured from the phase shift, as shown in fig. 11.
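Both schemes reduce to short distance formulas: d = c·Δt/2 for pulse modulation, and d = c·Δφ/(4πf) for continuous-wave modulation, where the 4π accounts for the round trip of the modulated wave. A minimal sketch:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def dtof_distance(round_trip_time_s):
    """Pulse modulation (DTOF): distance from the emit-to-receive
    time difference. d = c * dt / 2 (the light travels out and back)."""
    return C * round_trip_time_s / 2.0

def itof_distance(phase_shift_rad, modulation_freq_hz):
    """Continuous-wave modulation (ITOF): distance from the phase shift
    between the transmitted and received sine waves.
    d = c * dphi / (4 * pi * f)."""
    return C * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)

def itof_unambiguous_range(modulation_freq_hz):
    """Maximum ITOF distance measurable before the phase wraps around."""
    return C / (2.0 * modulation_freq_hz)
```

For example, a 10 ns round trip corresponds to about 1.5 m; at a 100 MHz modulation frequency (an illustrative value) the unambiguous ITOF range is about 1.5 m as well.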
It should be noted that, in the embodiment of the present application, the standard image is post-processed by using other methods in the prior art to obtain the image depth map, which is not described herein again, and any processing method is within the protection scope of the present application without departing from the inventive concept of the present application.
EXAMPLE III
The present application provides a 3D identification system. The 3D identification system is a binocular vision identification system comprising an active binocular vision unit 100 and a passive binocular vision unit 200. As shown in fig. 14, the active binocular vision unit 100 includes any of the 3D recognition devices described above. As shown in fig. 15, the passive binocular vision unit 200 is provided with a third receiving module 50 and a fourth receiving module 60; the third receiving module 50 is provided with a third polarizer 501 on the side close to the measured object, and/or the fourth receiving module 60 is provided with a fourth polarizer on the side close to the measured object, wherein:
the light emitting module 10 is configured to emit the coded structured light or TOF light to the surface of the measured object;
the first receiving module 20 is configured to receive the structured light or the TOF light reflected by at least a part of the surface of the object to be measured to obtain a first image related to the object to be measured;
the second receiving module 30 is configured to receive the structured light or the TOF light reflected by at least a part of the surface of the object to be measured to obtain a second image related to the object to be measured;
the third receiving module 50 is configured to receive the structured light or the TOF light reflected by at least a part of the surface of the object to be measured to obtain a third image related to the object to be measured;
the fourth receiving module 60 is configured to receive the structured light or the TOF light reflected by at least a part of the surface of the object to be measured to obtain a fourth image related to the object to be measured;
the data processing unit is configured to acquire the first image and the second image and establish active binocular 3D depth information of the measured object based on the first image and the second image; and is further configured to acquire the third image and the fourth image and establish passive binocular 3D depth information of the measured object based on the third image and the fourth image.
In the 3D binocular vision system provided by the present application, the light source used by the active binocular unit may be the light emitting module 10 with any of the light sources described above. Polarizers passing the same or different angles are arranged in front of the two receiving modules, and after the images received by the two receiving modules are processed by the back-end algorithm, the 3D depth information of the object can be output with better quality. The passive binocular unit uses no light source; polarizers passing the same or different angles are likewise arranged in front of its two receiving modules, and the 3D depth information of the object is obtained in the same way.
The application provides a 3D identification method, applied to the 3D identification system as described in any of the above, comprising:
ST31, emitting light to the surface of the measured object through the light emitting module 10;
ST32, acquiring a first image related to the object to be measured through the first receiving module 20;
ST33, acquiring a second image related to the object to be measured through the second receiving module 30;
ST34, comparing the first image and the second image through a data processing unit, and selecting one of the images as a standard image, or processing the first image and the second image through the data processing unit to obtain the standard image;
ST35, carrying out post-processing on the standard image to obtain the 3D depth information of the measured object.
As shown in fig. 16, binocular stereoscopic vision is a method, based on the parallax principle, of acquiring three-dimensional geometric information of an object by capturing two images of the measured object from different positions and calculating the positional deviation between corresponding points in the images. Because binocular vision relies only on the images themselves for feature matching, the positions of the two cameras must be accurately calibrated before the binocular vision camera is used.
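For rectified cameras, the parallax principle reduces to Z = f·B/d, where f is the focal length in pixels, B is the baseline between the two cameras, and d is the disparity of corresponding points. A minimal sketch with illustrative numbers:

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Binocular stereo depth for rectified, calibrated cameras:
    Z = f * B / d. Larger disparity means a closer object."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# f = 800 px, baseline = 6 cm, disparity = 16 px -> depth of 3.0 m
```

Halving the disparity doubles the estimated depth, which is why the calibration of the two camera positions mentioned above matters so much.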
In the embodiments of the present application, different working modes can be executed according to the application scenario and environmental conditions. For example, when the target object is close and the application has extremely high requirements on 3D imaging precision, such as face authentication or face payment, the monocular mode can be executed. It will be appreciated that the measurement range in the monocular mode is often relatively small; therefore, if a depth image with a wider measurement range is required, the binocular mode can be executed. Although the projection field angle of the projection module is unchanged, the total collection field angle over which the two receiving modules collect the structured light image is larger than the collection field angle of a single receiving module, so an image with a wider field angle can be obtained by using the two receiving modules.
It should be noted that, in the embodiment of the present application, the light emitting module 10 in the active binocular mode may adopt a structured light unit structure, and may also adopt a TOF unit structure, which is not limited in this application. For the processing modes of the structured light and the TOF light, different modes in the prior art can be adopted, which is not described herein again, and any processing mode is within the protection scope of the present application without departing from the inventive concept of the present application.
It will be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like, as used herein, refer to orientations or positional relationships as shown in the drawings, are used solely to facilitate and simplify the description, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Unless defined otherwise, technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Terms such as "disposed" and the like, as used herein, may refer to one element being directly attached to another element or one element being attached to another element through intervening elements. Features described herein in one embodiment may be applied to another embodiment, either alone or in combination with other features, unless the feature is otherwise inapplicable or otherwise stated in the other embodiment.
The present invention has been described in terms of the above embodiments, but it should be understood that the above embodiments are for purposes of illustration and description only and are not intended to limit the invention to the scope of the described embodiments. It will be appreciated by those skilled in the art that many variations and modifications may be made to the teachings of the invention, which fall within the scope of the invention as claimed.

Claims (10)

1. A 3D identification device, characterized in that it comprises:
a light emitting module configured to emit light toward the surface of a measured object;
a first receiving module configured to receive at least part of the light reflected from the surface of the measured object;
a second receiving module configured to receive at least part of the light reflected from the surface of the measured object;
wherein the first receiving module is provided with a first polarizer on the side close to the measured object, and/or the second receiving module is provided with a second polarizer on the side close to the measured object.

2. The 3D identification device according to claim 1, wherein the polarization angles of the first polarizer and the second polarizer are the same.

3. The 3D identification device according to claim 1, wherein the polarization angles of the first polarizer and the second polarizer are different.

4. The 3D identification device according to claim 1, wherein the polarization angle of the first polarizer is 0-180° and the polarization angle of the second polarizer is 0-180°.

5. The 3D identification device according to claim 1, wherein the first receiving module and the second receiving module are located on the same side of the light emitting module, or the first receiving module and the second receiving module are located on different sides of the light emitting module.

6. The 3D identification device according to claim 1, wherein the light emitting module comprises a light source and a diffractive optical element, and the light source comprises one or more of a high-contrast vertical-cavity surface-emitting laser, a single-aperture broad-area vertical-cavity surface-emitting laser, a vertical-cavity surface-emitting laser array, a laser diode, and an LED light source.

7. A 3D identification system, characterized in that it comprises the 3D identification device according to any one of claims 1-6, wherein the 3D identification system is a 3D structured light identification system, in which:
the light emitting module is configured to emit coded structured light toward the surface of the measured object;
the first receiving module is configured to receive at least part of the structured light reflected from the surface of the measured object to acquire a first image related to the measured object;
the second receiving module is configured to receive at least part of the structured light reflected from the surface of the measured object to acquire a second image related to the measured object;
a data processing unit is configured to acquire the first image and the second image and to establish 3D depth information of the measured object based on the first image and the second image.

8. A 3D identification system, characterized in that it comprises the 3D identification device according to any one of claims 1-6, wherein the 3D identification system is a 3D TOF identification system, in which:
the light emitting module is configured to emit TOF light toward the surface of the measured object;
the first receiving module is configured to receive at least part of the TOF light reflected from the surface of the measured object to acquire a first image related to the measured object;
the second receiving module is configured to receive at least part of the TOF light reflected from the surface of the measured object to acquire a second image related to the measured object;
a data processing unit is configured to acquire the first image and the second image and to establish 3D depth information of the measured object based on the first image and the second image.

9. A 3D identification system, characterized in that the 3D identification system is a binocular vision identification system comprising an active binocular vision unit and a passive binocular vision unit, wherein the active binocular vision unit comprises the 3D identification device according to any one of claims 1-6, the passive binocular vision unit is provided with a third receiving module and a fourth receiving module, the third receiving module is provided with a third polarizer on the side close to the measured object, and/or the fourth receiving module is provided with a fourth polarizer on the side close to the measured object, wherein:
the light emitting module is configured to emit coded structured light or TOF light toward the surface of the measured object;
the first receiving module is configured to receive at least part of the structured light or TOF light reflected from the surface of the measured object to acquire a first image related to the measured object;
the second receiving module is configured to receive at least part of the structured light or TOF light reflected from the surface of the measured object to acquire a second image related to the measured object;
the third receiving module is configured to receive at least part of the structured light or TOF light reflected from the surface of the measured object to acquire a third image related to the measured object;
the fourth receiving module is configured to receive at least part of the structured light or TOF light reflected from the surface of the measured object to acquire a fourth image related to the measured object;
a data processing unit is configured to acquire the first image and the second image and to establish active binocular 3D depth information of the measured object based on the first image and the second image, and is further configured to acquire the third image and the fourth image and to establish passive binocular 3D depth information of the measured object based on the third image and the fourth image.

10. A 3D identification method applied to the 3D identification system according to any one of claims 7-9, characterized in that it comprises:
emitting light toward the surface of the measured object through the light emitting module;
acquiring a first image related to the measured object through the first receiving module;
acquiring a second image related to the measured object through the second receiving module;
comparing the first image and the second image through a data processing unit and selecting one of the images as a standard image, or processing the first image and the second image through the data processing unit to obtain a standard image;
post-processing the standard image to obtain the 3D depth information of the measured object.
CN202110699174.1A 2021-06-23 2021-06-23 3D recognition device, 3D recognition system and recognition method thereof Pending CN113326802A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110699174.1A CN113326802A (en) 2021-06-23 2021-06-23 3D recognition device, 3D recognition system and recognition method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110699174.1A CN113326802A (en) 2021-06-23 2021-06-23 3D recognition device, 3D recognition system and recognition method thereof

Publications (1)

Publication Number Publication Date
CN113326802A true CN113326802A (en) 2021-08-31

Family

ID=77424411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110699174.1A Pending CN113326802A (en) 2021-06-23 2021-06-23 3D recognition device, 3D recognition system and recognition method thereof

Country Status (1)

Country Link
CN (1) CN113326802A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114598855A (en) * 2022-02-28 2022-06-07 深圳博升光电科技有限公司 Structured light system, electronic equipment with structured light system and control method
CN116908877A (en) * 2023-06-08 2023-10-20 信扬科技(佛山)有限公司 Time-of-flight cameras and depth image acquisition methods

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150092016A1 (en) * 2013-09-30 2015-04-02 Lenovo (Beijing) Co., Ltd. Method For Processing Data And Apparatus Thereof
US20180196998A1 (en) * 2017-01-11 2018-07-12 Microsoft Technology Licensing, Llc Infrared imaging recognition enhanced by 3d verification
CN110246186A (en) * 2019-04-15 2019-09-17 深圳市易尚展示股份有限公司 A kind of automatized three-dimensional colour imaging and measurement method
CN111708039A (en) * 2020-05-24 2020-09-25 深圳奥比中光科技有限公司 Depth measuring device and method and electronic equipment
WO2021008209A1 (en) * 2019-07-12 2021-01-21 深圳奥比中光科技有限公司 Depth measurement apparatus and distance measurement method
CN112379563A (en) * 2020-11-11 2021-02-19 深圳博升光电科技有限公司 Three-dimensional imaging device and method based on structured light and electronic equipment
CN112834435A (en) * 2019-11-22 2021-05-25 深圳市光鉴科技有限公司 4D camera and electronic equipment
CN112880600A (en) * 2021-04-29 2021-06-01 深圳博升光电科技有限公司 Imaging device and equipment for detecting glass
CN215219710U (en) * 2021-06-23 2021-12-17 深圳博升光电科技有限公司 3D recognition device and 3D recognition system


Similar Documents

Publication Publication Date Title
KR102538645B1 (en) Augmentation systems and methods for sensor systems and imaging systems using polarized light
CN104634276B (en) Three-dimension measuring system, capture apparatus and method, depth computing method and equipment
CN106772431B (en) A kind of Depth Information Acquistion devices and methods therefor of combination TOF technology and binocular vision
US9392262B2 (en) System and method for 3D reconstruction using multiple multi-channel cameras
CN112232109B (en) Living body face detection method and system
US11503228B2 (en) Image processing method, image processing apparatus and computer readable storage medium
US8836756B2 (en) Apparatus and method for acquiring 3D depth information
CN101288105B (en) For the method and system of object reconstruction
US9501833B2 (en) Method and system for providing three-dimensional and range inter-planar estimation
KR101706093B1 (en) System for extracting 3-dimensional coordinate and method thereof
CN215219710U (en) 3D recognition device and 3D recognition system
JP2001264033A (en) Three-dimensional shape-measuring apparatus and its method, three-dimensional modeling device and its method, and program providing medium
CN107451561A (en) Iris recognition supplementary light method and device
JP2008537190A (en) Generation of three-dimensional image of object by irradiating with infrared pattern
CN107463659B (en) Object searching method and device
CN113326802A (en) 3D recognition device, 3D recognition system and recognition method thereof
CN112085793A (en) Three-dimensional imaging scanning system based on combined lens group and point cloud registration method
CN107610127B (en) Image processing method, device, electronic device, and computer-readable storage medium
Wong et al. Fast acquisition of dense depth data by a new structured light scheme
CN107527335A (en) Image processing method and device, electronic device, and computer-readable storage medium
Harvent et al. Multi-view dense 3D modelling of untextured objects from a moving projector-cameras system
CN113283422A (en) 3D structured light recognition device, system and method
CN119168890B (en) Point cloud generation method and device based on millimeter wave and visual image fusion
US8682041B2 (en) Rendering-based landmark localization from 3D range images
EP4217962B1 (en) A device and method for image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination