CN111654626B - High-resolution camera containing depth information - Google Patents
High-resolution camera containing depth information
- Publication number
- CN111654626B CN111654626B CN202010505524.1A CN202010505524A CN111654626B CN 111654626 B CN111654626 B CN 111654626B CN 202010505524 A CN202010505524 A CN 202010505524A CN 111654626 B CN111654626 B CN 111654626B
- Authority
- CN
- China
- Prior art keywords
- image
- rgb
- depth
- depth image
- sensor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
Abstract
The invention discloses a high-resolution camera containing depth information, comprising an RGB image acquisition unit, a depth image acquisition unit, a light source emission unit, an FPGA processing unit, an ARM processor unit, a storage unit and an Ethernet unit. The camera first collects an RGB image and a depth image and obtains the accurate distance from the camera to the target from the depth image; the RGB image and the depth image are then fused so that the two coincide completely; finally, the depth image is interpolated up to the resolution of the RGB image, so that the output is a high-resolution image containing depth information. The camera can thus acquire both a high-resolution color image of a target and the depth information of each corresponding point on the target.
Description
Technical Field
The invention belongs to the field of high-resolution cameras, and particularly relates to a high-resolution camera containing depth information.
Background
A high-resolution camera can capture fine detail of a target and has a wide range of applications. However, many modern application scenarios, especially industrial ones, place higher demands on the camera: not only must high-resolution target information be acquired, but the distance from target to camera must also be known. A conventional camera can reach very high resolution but cannot acquire distance information. A binocular camera built from two high-resolution planar image sensors is one solution, but it has several problems: the image-fusion algorithm is computationally heavy, so the camera cannot easily run at high speed; the targets must have distinct features; and the measurement range is comparatively short. Alternatively, the images of two different cameras can be related by different calibration matrices at different depth distances, but obtaining good coincidence then requires storing many calibration matrices, and the table lookups consume a large amount of computation time.
For the above reasons, a new solution is proposed.
Disclosure of Invention
It is an object of the invention to provide a high resolution camera comprising depth information.
The purpose of the invention can be realized by the following technical scheme:
a high-resolution camera containing depth information comprises a light source emitting unit, an RGB image acquisition unit, a depth image acquisition unit, an FPGA processing unit, an ARM processor, a storage module and a gigabit Ethernet unit;
the system comprises a light source emitting unit, an RGB image acquisition unit, an FPGA processing unit, an ARM processor, a depth image acquisition unit, an FPGA processing unit and an ARM processor, wherein the light source emitting unit is connected with the depth image acquisition unit; the storage module and the gigabit Ethernet unit are respectively connected with the ARM processor;
the RGB image acquisition unit is used for acquiring RGB images, and the depth image acquisition unit is used for acquiring depth images; the light source emission unit is used as an active light source of the depth image acquisition unit; the FPGA processing unit completes the fusion calculation of the RGB image and the depth image; the ARM processor is used for configuring the working mode of the sensor and controlling data flow; the storage module is used for storing temporary data for data processing, and the gigabit Ethernet unit is used for outputting the data;
the depth image acquisition unit acquires the depth image as follows: a beam of modulated infrared light is emitted by the light source emitting unit, and the distance between the target and the camera is obtained by calculating the time of flight of the emitted light;
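The time-of-flight principle just described can be illustrated numerically. This is a minimal sketch of continuous-wave ToF ranging, not the patent's firmware; the phase-shift formula and the function names are assumptions, and the 12 MHz default is the modulation frequency stated later in the document.

```python
# Sketch of continuous-wave time-of-flight ranging (illustrative only).
# For modulated light, distance follows from the measured phase shift:
#   d = c / (2 * f_mod) * (delta_phi / (2 * pi))
import math

C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(f_mod: float = 12e6) -> float:
    """Maximum measurable distance before the phase wraps around 2*pi."""
    return C / (2.0 * f_mod)

def tof_distance(delta_phi: float, f_mod: float = 12e6) -> float:
    """Target distance in metres for a measured phase shift in radians."""
    return unambiguous_range(f_mod) * delta_phi / (2.0 * math.pi)
```

At a 12 MHz modulation frequency the unambiguous range is about 12.5 m; a phase shift of π corresponds to half of that.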
the RGB image acquisition unit acquires the RGB image through a high-resolution RGB image sensor;
the specific process of the FPGA processing unit for completing the fusion calculation of the RGB image and the depth image comprises the following steps:
the method comprises the following steps: the RGB image acquisition unit and the depth image acquisition unit store the acquired RGB image and depth image into the storage module;
step two: transforming the depth image pixel by pixel with the conversion matrix M to obtain a depth image superposed on the RGB image;
wherein the conversion relationship between RGB image pixels and depth image pixels differs at different distances and is related to the depth value; the correspondence between RGB image pixels and depth image pixels is calculated in real time from the depth values obtained by the depth image sensor, i.e. different conversion matrices M are obtained at different depths;
step three: interpolating the aligned depth image to the same resolution as the RGB image;
step four: aligning the interpolated depth image with the RGB image pixel by pixel; each pixel of the RGB image obtained at this point contains depth information in addition to the three RGB color channels, thereby giving the target image;
the FPGA processing unit is used for transmitting the target image to the storage module for storage; the storage module comprises an RGB image, a depth image and an RGBD image containing depth information after coordinate alignment; according to the practical application scene, an RGB image, a depth image or an RGBD image containing depth information can be output, and a gigabit Ethernet interface is used for data output.
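The four-step fusion above can be sketched with NumPy. This is an illustrative software model of what the patent assigns to the FPGA: the mapping function, the zero-initialized scatter buffer, and the nearest-neighbour gap fill are this sketch's assumptions, not details given in the patent.

```python
import numpy as np

def fuse_rgbd(rgb, depth, map_fn):
    """Sketch of steps two to four: scatter each depth pixel to its
    RGB-frame coordinate via the depth-dependent mapping, fill the gaps
    by nearest-neighbour upsampling, and stack an RGBD image.

    rgb    : (H, W, 3) color image
    depth  : (h, w) distance map; H, W are integer multiples of h, w
    map_fn : (u, v, z) -> (uR, vR), depth-dependent pixel conversion
    """
    H, W, _ = rgb.shape
    h, w = depth.shape
    # Step two: warp depth pixels into the RGB pixel frame.
    aligned = np.zeros((H, W), dtype=np.float32)
    for v in range(h):
        for u in range(w):
            uR, vR = map_fn(u, v, float(depth[v, u]))
            if 0 <= vR < H and 0 <= uR < W:
                aligned[vR, uR] = depth[v, u]
    # Step three: interpolate to RGB resolution; nearest-neighbour repeat
    # is a stand-in for the patent's unspecified interpolation.
    filled = np.repeat(np.repeat(depth, H // h, axis=0), W // w, axis=1)
    aligned = np.where(aligned > 0, aligned, filled)
    # Step four: each RGB pixel now carries R, G, B and a depth channel.
    return np.dstack([rgb.astype(np.float32), aligned])
```

With a 2×2 depth map, a 4×4 RGB image and a simple 2× scaling map, the result is a 4×4×4 RGBD array whose fourth channel holds a depth value at every pixel.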
Further, the depth image acquisition unit is a TOF-based depth image sensor; the RGB image acquisition unit is a high-resolution RGB color sensor.
Further, the modulation frequency of the modulated infrared light emitted by the light source emission unit is 12 MHz.
Further, the depth image acquisition unit and the RGB image acquisition unit use sensors with the same target surface size in the design process.
Further, the acquisition mode of the external parameter matrix M is as follows:
s1: firstly, establishing a relation between a depth image pixel coordinate system and an RGB image pixel coordinate system;
s2: the homogeneous coordinate of each pixel of an RGB sensor in an RGB image acquisition unit is described as follows:
in the formula, pix_Ri is the representation of the RGB sensor pixel in the pixel coordinate system; the resolution of the RGB image is i = M × N; Z_Ri is the distance of the pixel to the target; (u_Ri, v_Ri) is the pixel coordinate;
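The body of formula 1 was an image and did not survive extraction. Under the standard homogeneous pinhole convention implied by the symbols defined above, a plausible reconstruction is:

```latex
% Formula 1 (reconstructed; assumed homogeneous convention):
pix_{Ri} \;=\; Z_{Ri}\begin{pmatrix} u_{Ri} \\ v_{Ri} \\ 1 \end{pmatrix},
\qquad i = 1, \dots, M \times N
```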
s3: establishing a conversion relation between an RGB pixel coordinate system and an RGB image camera coordinate system;
In formula 2, dx and dy represent the distances between adjacent pixels of the RGB sensor in the horizontal and vertical directions; (X_Ri, Y_Ri) is the representation of the target in the RGB image coordinate system, and (XC_Ri, YC_Ri) is its representation in the RGB camera coordinate system; f_R is the focal length of the RGB image lens;
s4: deriving formula 3 according to formula 2, specifically:
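The bodies of formulas 2 and 3 are likewise missing. With (u_0, v_0) denoting the principal point (a symbol introduced here, not present in the original text), the standard pixel-to-camera relations consistent with the definitions in S3 would read:

```latex
% Formula 2 (reconstructed): pixel coordinates from image-plane coordinates,
% plus perspective projection with focal length f_R
u_{Ri} = \frac{X_{Ri}}{dx} + u_0, \quad v_{Ri} = \frac{Y_{Ri}}{dy} + v_0,
\quad X_{Ri} = f_R \frac{XC_{Ri}}{Z_{Ri}}, \quad Y_{Ri} = f_R \frac{YC_{Ri}}{Z_{Ri}}

% Formula 3 (reconstructed): combined matrix form, K_R the RGB intrinsics
Z_{Ri}\begin{pmatrix} u_{Ri} \\ v_{Ri} \\ 1 \end{pmatrix}
=
\begin{pmatrix} f_R/dx & 0 & u_0 \\ 0 & f_R/dy & v_0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} XC_{Ri} \\ YC_{Ri} \\ ZC_{Ri} \end{pmatrix}
= K_R \begin{pmatrix} XC_{Ri} \\ YC_{Ri} \\ ZC_{Ri} \end{pmatrix}
```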
s5: describing the homogeneous coordinate of each pixel of the TOF sensor in the depth image acquisition unit as follows:
in the formula, pix_Lj is the representation of the depth sensor pixel in the pixel coordinate system; the resolution of the depth image is j = B × C; Z_Lj is the distance of the pixel to the target; (u_Lj, v_Lj) is the coordinate in the depth image pixel coordinate system;
s6: the conversion relationship between the depth image pixel coordinate system and the depth image camera coordinate system is established in formula 5; specifically, (XC_Lj, YC_Lj) are the coordinates in the depth image camera coordinate system, and f_L is the focal length of the depth image lens;
s7: deducing formula 6 according to formula 5
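Formulas 4 through 6 are also missing from the extraction. They are presumably the TOF-sensor analogues of formulas 1 to 3; a reconstruction under that assumption, with K_L the depth-sensor intrinsic matrix and primed pixel pitches and principal point (notation introduced here):

```latex
% Formulas 4-6 (reconstructed): same pinhole relations for the TOF sensor
pix_{Lj} = Z_{Lj}\begin{pmatrix} u_{Lj} \\ v_{Lj} \\ 1 \end{pmatrix},
\qquad
Z_{Lj}\begin{pmatrix} u_{Lj} \\ v_{Lj} \\ 1 \end{pmatrix}
=
\begin{pmatrix} f_L/dx' & 0 & u_0' \\ 0 & f_L/dy' & v_0' \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} XC_{Lj} \\ YC_{Lj} \\ ZC_{Lj} \end{pmatrix}
= K_L \begin{pmatrix} XC_{Lj} \\ YC_{Lj} \\ ZC_{Lj} \end{pmatrix}
```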
S8: establishing the relation between the RGB sensor camera coordinate system and the depth sensor camera coordinate system; the difference between the image acquired by the RGB camera and the image acquired by the TOF camera is described by the external parameter matrix M, which is obtained by shooting the same target with both cameras simultaneously and calculating the difference between the two pictures; the conversion matrix describes the relationship between the captured picture and the pixel points of the sensor chip; specifically:
s9: from equations 3, 6 and 7, we can obtain:
s10: since Z_Ri ≈ Z_Lj, formula 8 reduces to formula 9, thereby establishing the relation between the pixel coordinates of the RGB sensor and those of the depth sensor; specifically:
from equation 9, the relationship between the RGB sensor pixel and the depth sensor pixel can be obtained as
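The bodies of formulas 7 to 10 did not survive extraction. A reconstruction consistent with the surrounding derivation, decomposing the external parameter matrix M into a rotation R and translation t and writing K_R, K_L for the two intrinsic matrices (notation introduced here, not the patent's), is:

```latex
% Formula 7 (reconstructed): extrinsic relation between the camera frames
\begin{pmatrix} XC_{Ri} \\ YC_{Ri} \\ ZC_{Ri} \end{pmatrix}
= R \begin{pmatrix} XC_{Lj} \\ YC_{Lj} \\ ZC_{Lj} \end{pmatrix} + t

% Formulas 8-10 (reconstructed): combining 3, 6 and 7, then using
% Z_{Ri} \approx Z_{Lj} = Z:
\begin{pmatrix} u_{Ri} \\ v_{Ri} \\ 1 \end{pmatrix}
\;\approx\; K_R\, R\, K_L^{-1} \begin{pmatrix} u_{Lj} \\ v_{Lj} \\ 1 \end{pmatrix}
+ \frac{1}{Z}\, K_R\, t
```

The 1/Z term is what makes the pixel conversion depth-dependent, matching the document's statement that formulas 11 and 12 give the pixel-coordinate relation as a function of distance.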
In the application process, once the external parameter matrix M is obtained, the relation between the depth image pixel coordinate system and the RGB image pixel coordinate system can be obtained, wherein the conversion relationship between depth image pixel coordinates and RGB image pixel coordinates is related to the distance Z_Ri (or Z_Lj) between the target and the camera; formulas 11 and 12 are obtained from formula 10 by matrix transformation and give how the pixel coordinates of an RGB sensor pixel and the corresponding TOF sensor pixel change with distance;
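The depth-dependent pixel conversion described here can be sketched in Python. The decomposition of the external parameter matrix M into a rotation R and translation t, and the intrinsic matrices K_rgb and K_tof, are this sketch's assumptions rather than symbols from the patent:

```python
import numpy as np

def make_pixel_map(K_rgb, K_tof, R, t):
    """Build the depth-dependent conversion (u, v, z) -> (uR, vR) from a
    TOF pixel to the corresponding RGB pixel.

    K_rgb, K_tof : 3x3 intrinsic matrices (illustrative notation)
    R, t         : rotation and translation assumed to come from the
                   external parameter matrix M
    """
    A = K_rgb @ R @ np.linalg.inv(K_tof)  # part independent of depth
    b = K_rgb @ t                         # scaled by 1/Z at run time

    def map_fn(u, v, z):
        p = A @ np.array([u, v, 1.0]) + b / z
        return round(p[0] / p[2]), round(p[1] / p[2])

    return map_fn
```

With identity intrinsics and a pure x-translation, the same TOF pixel maps to different RGB pixels at different depths, which is exactly why the document computes the correspondence in real time from the measured depth.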
the matrix M is an external reference matrix of images acquired by the RGB image sensor and the depth image sensor; is an attribute of the camera itself and can be obtained by the camera itself. The extrinsic matrix is a calculation parameter in the transformation matrix.
The invention has the beneficial effects that:
the invention discloses a high-resolution camera containing depth information, which comprises an RGB image acquisition unit, a depth image acquisition unit, a light source emission unit, an FPGA processing unit, an ARM processor unit, a storage unit and an Ethernet unit; firstly, collecting an RGB image and a depth data image, acquiring the accurate distance from a camera to a target through the depth image, and fusing the RGB image and the depth image to enable the depth image and the RGB image to be completely overlapped; and finally, interpolating the depth image to a high resolution consistent with the resolution of the RGB image, wherein the output image is a high resolution image containing depth information. The camera can acquire a high-resolution color image of a target and can acquire depth information of a corresponding point of the target.
Drawings
In order to facilitate understanding for those skilled in the art, the present invention will be further described with reference to the accompanying drawings.
FIG. 1 is a block diagram of the system of the present invention.
Detailed Description
As shown in fig. 1, a high resolution camera including depth information includes a light source emitting unit, an RGB image collecting unit, a depth image collecting unit, an FPGA processing unit, an ARM processor, a storage module, and a gigabit ethernet unit;
The light source emitting unit is connected with the depth image acquisition unit; the RGB image acquisition unit and the depth image acquisition unit are connected with the FPGA processing unit, and the FPGA processing unit is connected with the ARM processor; the storage module and the gigabit Ethernet unit are respectively connected with the ARM processor.
The RGB image acquisition unit is used for acquiring RGB images, the depth image acquisition unit is used for acquiring depth images, and the depth image acquisition unit can be specifically a depth sensor; the light source emission unit is used as an active light source of the depth image acquisition unit; the FPGA processing unit completes the fusion calculation of the RGB image and the depth image; the ARM processor is used for configuring the working mode of the sensor and controlling data flow; the storage module is used for storing temporary data for data processing, and the gigabit Ethernet unit is used for outputting the data;
the specific mode of acquiring the depth image by the depth image acquisition unit is as follows:
A beam of modulated infrared light is emitted through the light source emitting unit at a modulation frequency of 12 MHz, and the distance between the target and the camera is obtained by calculating the time of flight of the emitted light;
the RGB image acquisition unit acquires the RGB image through a high-resolution RGB image sensor; the depth image sensor and the RGB image sensor use sensors with the same target-surface size in the design process;
the specific process of the FPGA processing unit for completing the fusion calculation of the RGB image and the depth image comprises the following steps:
the method comprises the following steps: the RGB image acquisition unit and the depth image acquisition unit store the acquired RGB image and depth image into the storage module;
step two: transforming the depth image pixel by pixel with the conversion matrix M to obtain a depth image superposed on the RGB image;
At different distances, the conversion relationship between RGB image pixels and depth image pixels is different; the conversion relationship is related to the depth value. The correspondence between RGB image pixels and depth image pixels is calculated from the depth values acquired by the depth sensor.
The conversion matrix M is obtained in the following manner:
s1: firstly, establishing a relation between a depth image pixel coordinate system and an RGB image pixel coordinate system;
s2: the homogeneous coordinate of each pixel of an RGB sensor in an RGB image acquisition unit is described as follows:
in the formula, pix_Ri is the representation of the RGB sensor pixel in the pixel coordinate system; the resolution of the RGB image is i = M × N; Z_Ri is the distance of the pixel to the target; (u_Ri, v_Ri) is the pixel coordinate; f_R is the focal length of the RGB image lens;
establishing a conversion relation between an RGB pixel coordinate system and an RGB image camera coordinate system;
In formula 2, dx and dy denote the distances between adjacent pixels of the RGB sensor in the horizontal and vertical directions; (X_Ri, Y_Ri) is the representation of the target in the RGB image coordinate system, and (XC_Ri, YC_Ri) is its representation in the RGB camera coordinate system;
deriving formula 3 according to formula 2, specifically:
describing the homogeneous coordinate of each pixel of the TOF sensor in the depth image acquisition unit as follows:
in the formula, pix_Lj is the representation of the depth sensor pixel in the pixel coordinate system; the resolution of the depth image is j = B × C; Z_Lj is the distance of the pixel to the target; (u_Lj, v_Lj) is the coordinate in the depth image pixel coordinate system; (XC_Lj, YC_Lj) are the coordinates in the depth image camera coordinate system; f_L is the focal length of the depth image lens;
establishing a conversion relation between a depth image pixel coordinate system and a depth image camera coordinate system, which specifically comprises the following steps:
deducing formula 6 according to formula 5
Establishing a relation between an RGB sensor camera coordinate system and a depth sensor camera coordinate system; wherein M is an extrinsic parameter matrix between the RGB sensor and the depth sensor;
from equations 3, 6 and 7, we can obtain:
Since Z_Ri ≈ Z_Lj, formula 8 reduces to formula 9, thereby establishing the relation between the pixel coordinates of the RGB sensor and those of the depth sensor; specifically:
from equation 9, the relationship between the RGB sensor pixel and the depth sensor pixel can be obtained as
In the practical application process, once the external parameter matrix M is obtained, the relation between the depth image pixel coordinate system and the RGB image pixel coordinate system can be obtained, wherein the conversion relationship between depth image pixel coordinates and RGB image pixel coordinates is related to the distance Z_Ri (or Z_Lj) between the target and the camera. Formula 10 is transformed into formulas 11 and 12 by matrix transformation.
The conversion matrix M is the external parameter matrix between the images collected by the RGB image sensor and the depth image sensor; it is an attribute of the camera itself and can be obtained from the camera itself.
The aligned depth image is interpolated to the same resolution as the RGB image.
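A minimal sketch of this interpolation step, assuming NumPy and a bilinear scheme (the patent does not name an interpolation method, so bilinear is an illustrative choice):

```python
import numpy as np

def upsample_depth(depth, out_h, out_w):
    """Bilinear interpolation of a low-resolution depth map up to the
    RGB resolution (a stand-in for the patent's unspecified method)."""
    h, w = depth.shape
    ys = np.linspace(0.0, h - 1.0, out_h)   # sample positions, source rows
    xs = np.linspace(0.0, w - 1.0, out_w)   # sample positions, source cols
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                 # vertical blend weights
    wx = (xs - x0)[None, :]                 # horizontal blend weights
    top = depth[y0][:, x0] * (1 - wx) + depth[y0][:, x1] * wx
    bot = depth[y1][:, x0] * (1 - wx) + depth[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

Corner values are preserved and interior samples are blends of their four neighbours, so the upsampled depth map stays consistent with the measured values.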
The interpolated depth image is aligned with the RGB image pixel by pixel. Each pixel of the resulting RGB image then contains depth information in addition to the three RGB color channels.
The image that has completed coordinate conversion and pixel alignment is stored in the storage module. The storage module contains the RGB image, the depth image and the coordinate-aligned RGBD image containing depth information. Depending on the practical application scenario, the RGB image, the depth image or the RGBD image containing depth information can be output, with a gigabit Ethernet interface used for data output.
The foregoing is merely exemplary and illustrative of the present invention and various modifications, additions and substitutions may be made by those skilled in the art to the specific embodiments described without departing from the scope of the invention as defined in the following claims.
Claims (1)
1. A high-resolution camera containing depth information is characterized by comprising a light source emitting unit, an RGB image acquisition unit, a depth image acquisition unit, an FPGA processing unit, an ARM processor, a storage module and a gigabit Ethernet unit;
the light source emitting unit is connected with the depth image acquisition unit; the RGB image acquisition unit and the depth image acquisition unit are connected with the FPGA processing unit, and the FPGA processing unit is connected with the ARM processor; the storage module and the gigabit Ethernet unit are respectively connected with the ARM processor;
the RGB image acquisition unit is used for acquiring RGB images, and the depth image acquisition unit is used for acquiring depth images; the light source emission unit is used as an active light source of the depth image acquisition unit; the FPGA processing unit completes the fusion calculation of the RGB image and the depth image; the ARM processor is used for configuring the working mode of the sensor and controlling data flow; the storage module is used for storing temporary data for data processing, and the gigabit Ethernet unit is used for outputting the data;
the specific process of acquiring the depth image by the depth image acquisition unit is as follows: a beam of modulated infrared light is emitted by the light source emitting unit, and the distance between the target and the camera is obtained by calculating the time of flight of the emitted light;
the RGB image acquisition unit acquires the RGB image through a high-resolution RGB image sensor;
the specific process of the FPGA processing unit for completing the fusion calculation of the RGB image and the depth image comprises the following steps:
the method comprises the following steps: the RGB image acquisition unit and the depth image acquisition unit store the acquired RGB image and depth image into the storage module;
step two: transforming the depth image pixel by pixel with the conversion matrix M to obtain a depth image superposed on the RGB image;
wherein the conversion relationship between RGB image pixels and depth image pixels differs at different distances and is related to the depth value; the correspondence between RGB image pixels and depth image pixels is calculated in real time from the depth values obtained by the depth image sensor, i.e. different conversion matrices M are obtained at different depths;
step three: interpolating the aligned depth image to the same resolution as the RGB image;
step four: aligning the interpolated depth image with the RGB image pixel by pixel; each pixel of the RGB image obtained at this point contains depth information in addition to the three RGB color channels, thereby giving the target image;
the FPGA processing unit is used for transmitting the target image to the storage module for storage; the storage module comprises an RGB image, a depth image and a coordinate-aligned RGBD image containing depth information; according to the actual application scenario, the RGB image, the depth image or the RGBD image containing depth information can be output, and a gigabit Ethernet interface is used for data output; the depth image acquisition unit is a TOF-based depth image sensor; the RGB image acquisition unit is a high-resolution RGB color sensor; the modulation frequency of the modulated infrared light emitted by the light source emission unit is 12 MHz; the depth image acquisition unit and the RGB image acquisition unit use sensors with the same target-surface size in the design process; the conversion matrix M is obtained in the following manner:
s1: firstly, establishing a relation between a depth image pixel coordinate system and an RGB image pixel coordinate system;
s2: the homogeneous coordinate of each pixel of an RGB sensor in an RGB image acquisition unit is described as follows:
in the formula, pix_Ri is the representation of the RGB sensor pixel in the pixel coordinate system; the resolution of the RGB image is i = P × N; Z_Ri is the distance of the pixel to the target; (u_Ri, v_Ri) is the pixel coordinate;
s3: establishing a conversion relation between an RGB pixel coordinate system and an RGB image camera coordinate system;
In formula 2, dx and dy represent the distances between adjacent pixels of the RGB sensor in the horizontal and vertical directions; (X_Ri, Y_Ri) is the representation of the target in the RGB image coordinate system, and (XC_Ri, YC_Ri) is its representation in the RGB camera coordinate system; f_R is the focal length of the RGB image lens;
s4: deriving formula 3 according to formula 2, specifically:
s5: describing the homogeneous coordinate of each pixel of the TOF sensor in the depth image acquisition unit as follows:
in the formula, pix_Lj is the representation of the depth sensor pixel in the pixel coordinate system; the resolution of the depth image is j = B × C; Z_Lj is the distance of the pixel to the target; (u_Lj, v_Lj) is the coordinate in the depth image pixel coordinate system;
s6: the conversion relationship between the depth image pixel coordinate system and the depth image camera coordinate system is established in formula 5; specifically, (XC_Lj, YC_Lj) are the coordinates in the depth image camera coordinate system, and f_L is the focal length of the depth image lens;
s7: deducing formula 6 according to formula 5
S8: establishing the relation between the RGB sensor camera coordinate system and the depth sensor camera coordinate system; the difference between the image acquired by the RGB camera and the image acquired by the TOF camera is described by the conversion matrix M, which is obtained by shooting the same target with both cameras simultaneously and calculating the difference between the two pictures; the conversion matrix describes the relationship between the captured picture and the pixel points of the sensor chip; specifically:
s9: from equations 3, 6 and 7, we can obtain:
s10: since Z_Ri ≈ Z_Lj, formula 8 reduces to formula 9, thereby establishing the relation between the pixel coordinates of the RGB sensor and those of the depth sensor; specifically:
from equation 9, the relationship between the RGB sensor pixel and the depth sensor pixel can be obtained as
In the application process, once the conversion matrix M is obtained, the relation between the depth image pixel coordinate system and the RGB image pixel coordinate system can be obtained, wherein the conversion relationship between depth image pixel coordinates and RGB image pixel coordinates is related to the distance Z_Ri (or Z_Lj) between the target and the camera; formulas 11 and 12 are obtained from formula 10 by matrix transformation and give how the pixel coordinates of an RGB sensor pixel and the corresponding TOF sensor pixel change with distance;
the conversion matrix M is an external reference matrix of the image acquired by the RGB image sensor and the depth image sensor, is an attribute of the camera itself, and can be obtained by the camera itself.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010505524.1A CN111654626B (en) | 2020-06-05 | 2020-06-05 | High-resolution camera containing depth information |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111654626A CN111654626A (en) | 2020-09-11 |
| CN111654626B true CN111654626B (en) | 2021-11-30 |
Family
ID=72348834
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010505524.1A Active CN111654626B (en) | 2020-06-05 | 2020-06-05 | High-resolution camera containing depth information |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111654626B (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112911265B (en) * | 2021-02-01 | 2023-01-24 | 北京都视科技有限公司 | Fusion processor and fusion processing system |
| CN115482285A (en) * | 2021-05-31 | 2022-12-16 | 腾讯科技(深圳)有限公司 | Image alignment method, device, equipment and storage medium |
| CN113938664A (en) * | 2021-09-10 | 2022-01-14 | 思特威(上海)电子科技股份有限公司 | Signal acquisition method of pixel array, image sensor, equipment and storage medium |
Family Cites Families (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2871843B1 (en) * | 2013-11-12 | 2019-05-29 | LG Electronics Inc. -1- | Digital device and method for processing three dimensional image thereof |
| US9721385B2 (en) * | 2015-02-10 | 2017-08-01 | Dreamworks Animation Llc | Generation of three-dimensional imagery from a two-dimensional image using a depth map |
| CN107534764B (en) * | 2015-04-30 | 2020-03-17 | 深圳市大疆创新科技有限公司 | System and method for enhancing image resolution |
| CN106254854B (en) * | 2016-08-19 | 2018-12-25 | 深圳奥比中光科技有限公司 | Preparation method, the apparatus and system of 3-D image |
| CN106572339B (en) * | 2016-10-27 | 2018-11-30 | 深圳奥比中光科技有限公司 | A kind of image acquisition device and image capturing system |
| CN106780618B (en) * | 2016-11-24 | 2020-11-03 | 周超艳 | Three-dimensional information acquisition method and device based on heterogeneous depth camera |
| CN109816731B (en) * | 2017-11-21 | 2021-08-27 | 西安交通大学 | Method for accurately registering RGB (Red Green blue) and depth information |
| CN209375823U (en) * | 2018-12-20 | 2019-09-10 | 武汉万集信息技术有限公司 | 3D camera |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | CB02 | Change of applicant information | Address after: 230000 intersection of Fangxing Avenue and Yulan Avenue, Taohua Industrial Park, Hefei Economic and Technological Development Zone, Anhui Province. Applicant after: Hefei Taihe Intelligent Technology Group Co.,Ltd. Address before: 230601 intersection of Fangxing Avenue and Yulan Avenue, Taohua Industrial Park Development Zone, Hefei Economic and Technological Development Zone, Anhui Province. Applicant before: HEFEI TAIHE OPTOELECTRONIC TECHNOLOGY Co.,Ltd. |
| | GR01 | Patent grant | |