
CN112700502B - Binocular camera system and binocular camera space calibration method - Google Patents


Info

Publication number
CN112700502B
Authority
CN
China
Prior art keywords
camera
event
image
address
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011593703.1A
Other languages
Chinese (zh)
Other versions
CN112700502A (en)
Inventor
吴金建
李汉标
石光明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202011593703.1A priority Critical patent/CN112700502B/en
Publication of CN112700502A publication Critical patent/CN112700502A/en
Application granted granted Critical
Publication of CN112700502B publication Critical patent/CN112700502B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention proposes a binocular camera system and a spatial calibration method for it, solving the prior art's problems of poor imaging quality and missing depth information. The binocular camera system cascades, in sequence, a camera module, a correction module, a calibration module and a storage module; the camera module comprises an event camera and an ordinary CMOS camera mounted in parallel. The spatial calibration method comprises: the camera module acquires the address-event data stream, the images and their shooting times; the correction module applies distortion and epipolar correction to the data stream and images; the calibration module calibrates the spatial positions of the corrected data stream and images; the storage module stores the corrected data stream and images, together with their correspondences as a set of homography matrices, in persistent memory. Because the invention uses no beam splitter, the imaging degradation a splitter causes is avoided. The two cameras have parallax, from which spatial depth information can be recovered. The invention applies to high-frame-rate, high-resolution data reconstruction and to fields such as the recognition and tracking of high-speed moving targets.

Description

Binocular camera system and binocular camera space calibration method

Technical Field

The invention belongs to the technical field of computer vision and relates to the structure and spatial calibration of a binocular camera, specifically a binocular camera system and a binocular camera spatial calibration method. It is used to reconstruct image data of high temporal and high spatial resolution and to improve the accuracy of recognizing and tracking high-speed moving targets; it can also be used for the spatial calibration of binocular camera systems that combine an event camera with an ordinary CMOS camera.

Background Art

A binocular camera system can obtain the distance between an object and the cameras from the parallax between its two cameras, and is widely used in robotics and SLAM. However, a binocular system built from ordinary CMOS cameras has serious drawbacks: when the relative motion between the system and the target is too fast, severe image blur occurs and the binocular camera cannot work well.

An event camera is a new type of camera in which each pixel senses light independently: a pixel outputs data only when the light intensity it receives changes, and otherwise produces nothing. Even when the relative motion between the event camera and the target is fast, no blur occurs. Event cameras also offer a large dynamic range and a small data volume. Their output format differs from that of an ordinary CMOS camera: an event camera outputs an address-event data stream.
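
To make the address-event format concrete, the stream can be modelled as a time-ordered sequence of (x, y, gray value, timestamp) tuples. A minimal sketch in Python — the field names are illustrative, not the Celex-V SDK's actual types:

```python
from typing import NamedTuple

class Event(NamedTuple):
    """One address-event: the pixel address that fired, its gray value,
    and the trigger time."""
    x: int      # column of the triggered pixel
    y: int      # row of the triggered pixel
    g: int      # gray value reported by the pixel (g > 0)
    t: float    # trigger timestamp in seconds

# An address-event data stream is simply a time-ordered sequence of events;
# pixels that see no intensity change contribute nothing to it.
stream = [Event(10, 20, 128, 0.0010), Event(11, 20, 130, 0.0015)]
```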

Combining an event camera and an ordinary CMOS camera into a binocular camera system unites the advantages of both: it yields image data with high dynamic range, high frame rate and high spatial resolution, which can also be used to compute spatial depth information.

Application publication CN108038888A, titled "Hybrid camera system and spatial calibration method and device therefor", discloses a hybrid camera system and a spatial calibration method that combine an event camera, an ordinary CMOS camera and a beam splitter into a binocular system, achieving spatial calibration by minimizing the spatial projection error. Its shortcomings are: using a beam splitter for coarse spatial registration between the event camera and the CMOS camera means each camera receives less than 50% of the incoming light, degrading imaging quality; and the beam splitter removes the parallax between the two cameras, losing spatial depth information.

Summary of the Invention

The purpose of the present invention is to overcome the above defects of the prior art by proposing a binocular camera system and a binocular camera spatial calibration method that can recognize and track high-speed moving targets while retaining parallax information and preserving imaging quality.

The present invention is a binocular camera system, characterized in that a camera module, a correction module, a calibration module and a storage module are cascaded in sequence to form the system; an event camera and an ordinary CMOS camera mounted in parallel with their lenses facing the same direction constitute the camera module. The modules are described as follows:

The camera module comprises the event camera and the CMOS camera and acquires the address-event data stream, the images and the image shooting times: the event camera acquires the address-event data stream, while the CMOS camera acquires the images and their shooting times. The event camera and the CMOS camera output data simultaneously, their output images have the same size, their lens principal axes are parallel, and their sensor planes lie in the same plane.

The correction module performs distortion correction and epipolar correction on the address-event data stream output by the event camera and on the images output by the CMOS camera, and outputs the corrected address-event data stream and corrected images. The correction module stores the pre-calibrated camera matrix and distortion matrix of the event camera, the pre-calibrated camera matrix and distortion matrix of the CMOS camera, and the pre-calibrated rotation matrix and translation matrix between the event camera and the CMOS camera.

The calibration module performs spatial position calibration on the distortion- and epipolar-corrected address-event data stream and images, obtaining and outputting a homography matrix that represents the correspondence between event coordinates in the address-event data stream and image pixel coordinates.

The storage module comprises persistent memory for persistently storing the calibration results: the corrected address-event data stream, the corrected images, and the homography matrices relating event coordinates in the data stream to image pixel coordinates.

The present invention is also a binocular camera spatial calibration method, implemented on the binocular camera system, characterized by the following steps:

(1) The camera module acquires the address-event data stream, the images and each image's shooting time:

The event camera in the camera module acquires and outputs the address-event data stream E = {e_i | 0 < i ≤ N_1}, while the CMOS camera acquires and outputs N_2 images and the moment each was taken, S = {s_r | 0 < r ≤ N_2}. Here e_i denotes the i-th event, e_i = (x_i, y_i, g_i, t_i), where x_i and y_i are the horizontal and vertical coordinates of the pixel at which e_i was triggered, g_i is the gray value of e_i (g_i > 0), and t_i is the trigger time of e_i; s_r = (I_r, tp_r), where I_r is the r-th image of the sequence S and tp_r is the moment I_r was taken; N_1 > 0 is the number of events in the address-event data stream E, and N_2 > 0.

(2) The correction module performs distortion correction and epipolar correction on each event e_i and each image I_r:

The correction module applies distortion correction to each event e_i using the event camera's camera matrix and distortion matrix, obtaining the distortion-corrected event e_{1,i}, and to each image I_r using the CMOS camera's camera matrix and distortion matrix, obtaining the distortion-corrected image I_{1,r}. It then applies epipolar correction to the event e_{1,i} and the image I_{1,r} using the rotation matrix and translation matrix between the event camera and the CMOS camera, obtaining the epipolar-corrected event e_{2,i} and image I_{2,r}.
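
The distortion-correction step for event coordinates can be sketched as follows, assuming a simple two-coefficient radial model inverted by fixed-point iteration; the full pipeline would additionally apply the stereo-rectification rotation for the epipolar step (e.g. via OpenCV). The camera matrix values below are illustrative:

```python
import numpy as np

def undistort_event_xy(x, y, K, dist, iters=5):
    """Correct an event's pixel address (x, y) for radial lens distortion.

    K is the 3x3 camera matrix, dist = (k1, k2) the radial coefficients.
    Only the coordinates change -- the event's gray value and timestamp
    stay untouched, as the method requires."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    # normalised, still-distorted coordinates
    xd, yd = (x - cx) / fx, (y - cy) / fy
    # fixed-point iteration inverting x_d = x_u * (1 + k1*r^2 + k2*r^4)
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        scale = 1.0 + dist[0] * r2 + dist[1] * r2 * r2
        xu, yu = xd / scale, yd / scale
    return xu * fx + cx, yu * fy + cy
```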

(3) The calibration module performs spatial position calibration on the corrected address-event data stream and images:

For each image I_{2,r}, the calibration module uses the shooting time tp_r of I_{2,r} to cut the corrected address-event data stream E_2 into multiple segments E_{2,r}, and accumulates the events of segment E_{2,r} into a matrix according to their coordinates. It computes feature points of the event-accumulation matrix and of the image I_{2,r} with the SIFT operator, matches the feature points to obtain multiple pairs of matching points, and from these computes the homography matrix between the accumulation matrix and the image I_{2,r}; the homography matrix represents the correspondence between event coordinates of the address-event data stream and image pixel coordinates.
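
The per-frame cutting of the event stream in step (3) can be sketched like this (a hypothetical helper; here events preceding the first frame are simply dropped):

```python
from bisect import bisect_right

def slice_events(events, frame_times):
    """Cut a time-ordered address-event stream into one segment per frame.

    Segment r collects the events with frame_times[r] <= t < frame_times[r+1];
    the last segment runs to the end of the stream. `events` is a sequence of
    (x, y, g, t) tuples and `frame_times` the sorted shooting times tp_r."""
    segments = [[] for _ in frame_times]
    for ev in events:
        r = bisect_right(frame_times, ev[3]) - 1  # frame this event follows
        if r >= 0:                                # drop pre-first-frame events
            segments[r].append(ev)
    return segments
```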

(4) The storage module stores the calibration results: it stores the corrected address-event data stream, the corrected images, and the set of homography matrices relating event coordinates in the data stream to image pixel coordinates in persistent memory, completing the spatial calibration of the event camera and the CMOS camera.

The aim of the present invention is to provide a binocular camera system, and to spatially calibrate it, without using a beam splitter.

Compared with the prior art, the present invention has the following advantages:

Improved imaging quality: since the invention uses no beam splitter, the event camera and the CMOS camera receive more light, and imaging quality is higher than in a binocular camera system that uses one.

Parallax between the cameras enables depth images: because there is parallax between the event camera and the CMOS camera of the invention, depth information can be obtained from the resulting address-event data stream and images, whereas a beam-splitter-based binocular system has no parallax between its two cameras and can hardly obtain depth images.

The camera can recognize and track high-speed moving targets: since the event camera of the invention produces no blur when shooting high-speed moving targets, their trajectory and attitude can be obtained from it. The ordinary CMOS camera supplies the background and color information of the scene that the event camera cannot capture, compensating for the event camera's inability to image stationary targets. By combining the event camera and the ordinary CMOS camera and fusing their advantages, the invention improves the accuracy of high-speed target recognition and trajectory tracking.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the overall structure of the binocular camera system of the present invention.

Fig. 2 is a flow chart of the implementation of the binocular camera spatial calibration method of the present invention.

Detailed Description of the Embodiments

The present invention is described in further detail below with reference to the drawings and specific embodiments:

Embodiment 1

Due to the limits of its imaging principle, an ordinary CMOS camera smears when shooting high-speed moving targets, blurring the image and making the captured data hard to use effectively. An event camera works on a different imaging principle: it mimics the imaging characteristics of the biological retina and images only moving targets, so its data volume is smaller and it can capture high-speed motion. Combining a CMOS camera and an event camera into a binocular camera system exploits the advantages of both, yielding data that simultaneously offers high frame rate, high dynamic range and high spatial resolution.

The present invention is a binocular camera system. Referring to Fig. 1, the system cascades a camera module, a correction module, a calibration module and a storage module in sequence. An event camera and an ordinary CMOS camera, mounted in parallel with their lenses facing the same direction, constitute the camera module. The modules are described as follows:

The camera module of the invention comprises the event camera and the CMOS camera and acquires the address-event data stream, the images and the image shooting times: the event camera acquires the address-event data stream, and the CMOS camera acquires the images and their shooting times. The two cameras output data simultaneously, their output images have the same size, their lens principal axes are parallel, and their sensor planes lie in the same plane.

The event camera used in this embodiment is a Celex-V camera; both the event camera and the ordinary CMOS camera use a lens with a focal length of 16 mm.

The correction module of the invention performs distortion correction and epipolar correction on the address-event data stream output by the event camera and on the images output by the CMOS camera, and outputs the corrected data stream and images. It stores the pre-calibrated camera matrix and distortion matrix of the event camera, the pre-calibrated camera matrix and distortion matrix of the CMOS camera, and the pre-calibrated rotation matrix and translation matrix between the two cameras.

In this embodiment the correction module is implemented on an Nvidia TX2, connected to the event camera and the ordinary CMOS camera through USB interfaces.

The calibration module of the invention performs spatial position calibration on the distortion- and epipolar-corrected address-event data stream and images, obtaining and outputting the homography matrices between event coordinates in the data stream and image pixel coordinates. The calibration module in this embodiment is also implemented on the Nvidia TX2.

The storage module of the invention comprises persistent memory that persistently stores the corrected address-event data stream, the corrected images, and the set of homography matrices representing the correspondence between event coordinates in the data stream and image pixel coordinates.

Because the two cameras of a beam-splitter binocular system in the prior art have no parallax, such a system only needs to be calibrated once in advance; but the beam splitter degrades the obtained image quality, and the system is structurally complex and demands high assembly precision. Against this background, the present invention proposes a binocular camera system without a beam splitter: since no splitter is used, no alignment between splitter and camera lenses is needed, which simplifies the structure and reduces cost, and the imaging quality of both cameras is preserved. Because the two cameras of the invention have parallax, the captured data loses no spatial depth information and can be used to compute depth maps.

Since the event camera of the invention produces no blur when shooting high-speed moving targets, their trajectory and attitude can be obtained from it. The ordinary CMOS camera supplies the background and color information of the scene that the event camera cannot capture, compensating for the event camera's inability to image stationary targets. By combining the two cameras and fusing their strengths, the invention improves the accuracy of high-speed target recognition and trajectory tracking.

The binocular camera system of the invention can acquire data of high temporal and spatial resolution together with depth information, and can be applied to the visual navigation of autonomous driving systems to improve their safety.

Embodiment 2

The overall constitution of the binocular camera system is the same as in Embodiment 1. The pre-calibrated camera matrix and distortion matrix of the event camera stored in the correction module are obtained by Zhang Zhengyou's calibration method, as are the camera matrix and distortion matrix of the CMOS camera; the pre-calibrated rotation matrix and translation matrix between the event camera and the CMOS camera are obtained by Bouguet's epipolar rectification method. In this embodiment, a checkerboard calibration board is used when calibrating the event camera and the ordinary CMOS camera; when calibrating the event camera, full-frame grayscale images are acquired through the Celex-V camera.

The invention performs distortion correction on each camera of the binocular pair separately, reducing image distortion and improving image quality. Not using a beam splitter both simplifies the system and avoids the need to precisely align camera lens and splitter. By applying epipolar rectification to the binocular pair, the invention constrains the image points of the same object point in the two cameras to the same horizontal line, which narrows the search range during feature point matching, reduces computation and enables real-time calibration.

Embodiment 3

Because the two cameras of the binocular camera system of the invention have parallax, the coordinates of a target on the two image planes depend on the target's position relative to the system: when that relative position changes, the target's coordinates on each camera's image plane change as well. The binocular camera system therefore has to be calibrated in real time according to the current relative position of target and system. The spatial calibration method of the invention captures data in real time with the binocular camera system of the invention and then corrects and calibrates these data in real time.

The present invention is also a binocular camera spatial calibration method, implemented on the binocular camera system described above. Referring to Fig. 2, it comprises the following steps:

(1) The camera module acquires the address-event data stream, the images and each image's shooting time:

The event camera in the camera module acquires and outputs the address-event data stream E = {e_i | 0 < i ≤ N_1}, where i is the index of an event, while the CMOS camera acquires and outputs N_2 images and the moment each was taken, S = {s_r | 0 < r ≤ N_2}. Here e_i denotes the i-th event, e_i = (x_i, y_i, g_i, t_i), where x_i and y_i are the horizontal and vertical coordinates of the pixel at which e_i was triggered, g_i is the gray value of e_i (g_i > 0), and t_i is its trigger time; s_r = (I_r, tp_r) comprises an image and its shooting moment, with I_r the r-th image of the sequence S, r the image index and tp_r the moment I_r was taken; N_1 > 0 is the total number of events in the address-event data stream E, and N_2 > 0. In this embodiment, the time difference between the shooting moments of two adjacent images is kept below 30 ms to limit the number of events the event camera generates between them.

(2) The correction module performs distortion correction and epipolar correction on each event e_i and each image I_r:

The correction module applies distortion correction to each event e_i using the event camera's camera matrix and distortion matrix, obtaining the distortion-corrected event e_{1,i}, and to each image I_r using the CMOS camera's camera matrix and distortion matrix, obtaining the distortion-corrected image I_{1,r}; it then applies epipolar correction to e_{1,i} and I_{1,r} using the rotation matrix and translation matrix between the event camera and the CMOS camera, obtaining the epipolar-corrected event e_{2,i} and image I_{2,r}. In this embodiment, distortion and epipolar correction of an event e_i means correcting the coordinates of e_i, leaving its time t_i and gray value g_i unchanged; distortion and epipolar correction for the CMOS camera means correcting the pixel coordinates in image I_r, leaving its shooting time tp_r unchanged.

(3) The calibration module performs spatial position calibration on the corrected address-event data stream and images:

For each image I_{2,r}, the calibration module uses the shooting time tp_r of I_{2,r} to cut the epipolar-corrected address-event data stream E_2 into multiple segments E_{2,r}, and accumulates the events of segment E_{2,r}, according to their coordinates, into an all-zero matrix M constructed by the calibration module for computing the matching points between the address-event data stream and the image I_{2,r}. The SIFT operator is used to match feature points between the matrix and the image I_{2,r}, yielding multiple matching points, from which the homography matrix between M and I_{2,r} is computed; the homography matrix represents the correspondence between event coordinates of the address-event data stream and image pixel coordinates.
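
The accumulation of an event segment into the all-zero matrix M can be sketched as follows (a minimal count-image variant; accumulating the gray value g instead of a count would be an equally plausible reading of the method):

```python
import numpy as np

def accumulate_events(segment, height, width):
    """Accumulate one address-event segment E_{2,r} into an all-zero
    matrix M, indexed by the event coordinates, so that SIFT features
    can be extracted from M like from an ordinary frame."""
    M = np.zeros((height, width), dtype=np.float32)
    for x, y, g, t in segment:
        M[y, x] += 1.0      # count events per pixel (or += g for intensity)
    return M
```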

(4)存储模块存储标定结果：存储模块将校正后的地址-事件数据流、校正后的图像和地址-事件数据流中事件坐标与图像像素坐标的单应矩阵的集合H={Hr|0<r≤N2}存储在持久化存储器中，完成对事件相机和CMOS相机的空间标定。本实施例中，数据存储格式为hdf5格式，在一个文件中存储一幅图像、该图像对应的地址-事件数据流段以及表示该图像像素坐标和对应的地址-事件数据流段事件坐标对应关系的单应矩阵。也可以使用多个文件分别存储一幅图像、该图像对应的地址-事件数据流段以及表示该图像像素坐标和对应的地址-事件数据流段事件坐标对应关系的单应矩阵，数据存储格式并不限于hdf5格式。(4) The storage module stores the calibration result: it stores the corrected address-event stream, the corrected images, and the set of homographies H = {Hr | 0 < r ≤ N2} between event coordinates in the address-event stream and image pixel coordinates in persistent memory, completing the spatial calibration of the event camera and the CMOS camera. In this embodiment the data are stored in HDF5 format: one file holds an image, the address-event stream segment corresponding to that image, and the homography relating that image's pixel coordinates to the segment's event coordinates. Multiple files may also be used to store the image, its address-event stream segment, and the homography separately; the storage format is not limited to HDF5.

本发明利用图像的拍摄时间,对事件相机输出的地址-事件数据流进行分段,实现了对图像和地址-事件数据流时间上的对齐,降低了对双目相机空间标定的误差。The invention uses the shooting time of the image to segment the address-event data stream output by the event camera, realizes the time alignment of the image and the address-event data stream, and reduces the error of spatial calibration of the binocular camera.

实施例4Example 4

由于本发明中事件相机输出的数据为一种非结构化的地址-事件数据流,所以需要将地址-事件数据流转为结构化数据,以便对地址-事件数据流进行处理。通过对地址-事件数据流和图像之间进行特征点匹配,得到多对匹配点,并通过这些匹配点计算匹配点之间的单应矩阵,完成对双目相机系统的标定。Since the data output by the event camera in the present invention is an unstructured address-event data stream, it is necessary to convert the address-event data stream into structured data in order to process the address-event data stream. By matching the feature points between the address-event data stream and the image, multiple pairs of matching points are obtained, and the homography matrix between the matching points is calculated through these matching points to complete the calibration of the binocular camera system.

一种双目相机系统及其双目相机空间标定方法同实施例1-3，步骤(3)中所述标定模块对校正后的地址-事件数据流和图像进行空间位置标定，包括有以下步骤：A binocular camera system and its binocular camera spatial calibration method are the same as in Embodiments 1-3. The spatial position calibration performed in step (3) by the calibration module on the corrected address-event stream and images comprises the following steps:

(3a)构建全零矩阵M=zeros(H,W)，其中H和W分别表示CMOS相机输出图像的总行数和总列数，H≥32，W≥32，令M中的每一个元素m=0，并令i=1。(3a) Construct an all-zero matrix M = zeros(H, W), where H and W are the total number of rows and columns of the CMOS camera's output image, H ≥ 32, W ≥ 32; set every element m of M to 0, and let i = 1.

(3b)令r=1,r表示校正后的普通CMOS相机图像的序号。(3b) Let r=1, r represents the serial number of the normal CMOS camera image after correction.

(3c)根据图像I2,r的拍摄时间tpr将地址-事件数据流划分为地址-事件数据流段:(3c) divide the address-event data flow into address-event data flow segments according to the shooting time tp r of the image I 2,r :

(3c1)设图像Ir对应的地址-事件数据流事件集合为E2,r,令E2,r为空集合。(3c1) Let the address-event data stream event set corresponding to the image I r be E 2,r , and let E 2,r be an empty set.

(3c2)判断事件e2,i=(x2,i,y2,i,gi,ti)中0<x2,i≤W且0<y2,i≤H是否成立,若是,执行步骤(3c3),否则,令i=i+1,并执行步骤(3c2)。(3c2) Judging whether 0<x 2, i ≤W and 0<y 2,i ≤H in the event e 2,i =(x 2,i ,y 2,i ,g i ,t i ) holds true, if so, Execute step (3c3), otherwise, set i=i+1, and execute step (3c2).

(3c3)将事件e2,i=(x2,i,y2,i,gi,ti)加入到集合E2,r中,并执行步骤(3c4)。(3c3) Add the event e 2,i =(x 2,i ,y 2,i ,gi , t i ) to the set E 2,r , and execute step (3c4).

(3c4)判断ti≤tpr是否成立，若是，令i=i+1，并执行步骤(3c2)，否则，得到图像Ir对应的地址-事件数据流段E2,r={e2,r,b|0<b≤B}，其中，e2,r,b=(x2,r,b,y2,r,b,g2,r,b,t2,r,b)，B表示集合E2,r中的事件总数量。(3c4) Determine whether ti ≤ tpr holds; if so, let i = i + 1 and execute step (3c2); otherwise, the address-event stream segment corresponding to image Ir is obtained as E2,r = {e2,r,b | 0 < b ≤ B}, where e2,r,b = (x2,r,b, y2,r,b, g2,r,b, t2,r,b) and B is the total number of events in the set E2,r.
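The segmentation loop (3c1)-(3c4) above can be sketched as the routine below. It uses the simplified rule that an event belongs to frame r while ti ≤ tpr (the patent's loop as written also keeps the single boundary event whose time first exceeds tpr; that detail is omitted here), and all names are illustrative.

```python
def segment_events(events, frame_times, width, height):
    """Split a time-ordered address-event stream into per-frame segments
    E_2r. Events outside the image bounds are discarded (step 3c2); an
    event is assigned to frame r while t <= tp_r (step 3c4)."""
    segments = [[] for _ in frame_times]
    i = 0
    for r, tp in enumerate(frame_times):
        while i < len(events):
            x, y, g, t = events[i]
            if not (0 < x <= width and 0 < y <= height):
                i += 1                    # out-of-bounds event: skip it
                continue
            if t > tp:
                break                     # belongs to a later frame
            segments[r].append(events[i])
            i += 1
    return segments
```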

在步骤(3c)中，本发明根据图像I2,r的拍摄时间tpr和地址-事件数据流中事件的时间信息，找到图像I2,r拍摄时事件相机产生的事件，有利于提高对图像和地址-事件数据流进行特征点匹配的准确度。In step (3c), the invention uses the capture time tpr of image I2,r and the time information of the events in the address-event stream to find the events generated by the event camera while image I2,r was being captured, which helps improve the accuracy of feature point matching between the image and the address-event stream.

(3d)开始将地址-事件数据流段中的事件累积到矩阵M中,令b=1,b是一个地址-事件数据流段E2,r中的事件序号。(3d) Begin to accumulate the events in the address-event data stream segment into the matrix M, let b=1, b is the event sequence number in an address-event data stream segment E 2,r .

(3e)将地址-事件数据流段中事件e2,r,b的灰度值g2,r,b累积到矩阵M中坐标为(x2,r,b,y2,r,b)的元素上，此时矩阵M已经不再是全零矩阵。(3e) Accumulate the gray value g2,r,b of event e2,r,b in the address-event stream segment into the element of matrix M at coordinates (x2,r,b, y2,r,b); the matrix M is then no longer an all-zero matrix.

(3f)判断b≤B是否成立,若是,令b=b+1,并执行步骤(3e),否则,当b>B时,完成对地址-事件数据流段的累积,执行步骤(3g)。(3f) Judging whether b≤B is established, if so, make b=b+1, and execute step (3e), otherwise, when b>B, complete the accumulation of address-event data flow segments, and execute step (3g) .
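Steps (3d)-(3f) accumulate one segment into the matrix M. Because the accumulation formula itself is an image elided from the text, this sketch assumes the event's gray value is added into the element at the event's address; a plain list-of-lists stands in for M, and the names are illustrative.

```python
def accumulate_segment(segment, height, width):
    """Build the H x W 'event image' M from one address-event segment:
    each event e = (x, y, g, t) adds its gray value g at address (x, y).
    Event addresses are 1-based, matching 0 < x <= W and 0 < y <= H."""
    M = [[0.0] * width for _ in range(height)]
    for x, y, g, t in segment:
        M[y - 1][x - 1] += g    # assumed accumulation rule (formula elided)
    return M
```

The resulting M plays the role of a gray image, so frame-based feature detectors such as SIFT can be run on it in step (3g).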

(3g)开始对图像I2,r和矩阵M进行特征点匹配：用SIFT算子提取矩阵M中的特征点，得到Nr,3个特征点HPr={pr,l|0<l≤Nr,3}，其中，pr,l=(xr,l,yr,l,Fr,l)，l为特征点pr,l的序号，(xr,l,yr,l)表示特征点的坐标，Fr,l表示特征点的特征描述符，Nr,3>8。(3g) Begin feature point matching between image I2,r and matrix M: extract feature points from M with the SIFT operator, obtaining Nr,3 feature points HPr = {pr,l | 0 < l ≤ Nr,3}, where pr,l = (xr,l, yr,l, Fr,l), l is the index of feature point pr,l, (xr,l, yr,l) are its coordinates, Fr,l is its feature descriptor, and Nr,3 > 8.

(3h)令l=1,l为矩阵M的特征点集合中特征点的序号。(3h) Let l=1, l is the serial number of the feature point in the feature point set of matrix M.

(3i)开始在图像I2,r中寻找与特征点pr,l匹配的特征点：用SIFT算子计算图像I2,r在第yr,l行上各像素的特征点，得到W个特征点HPr,l'={pr,l,k'|0<k≤W}，其中，pr,l,k'表示图像I2,r在第yr,l行上的第k个特征点，pr,l,k'=(xr,l,k',yr,l,k',Fr,l,k')，k为图像I2,r中第yr,l行上特征点的序号，(xr,l,k',yr,l,k')表示特征点pr,l,k'的坐标，Fr,l,k'表示特征点pr,l,k'的特征描述符。(3i) Begin searching image I2,r for the feature point matching pr,l: compute, with the SIFT operator, the feature points of the pixels of I2,r on row yr,l, obtaining W feature points HPr,l' = {pr,l,k' | 0 < k ≤ W}, where pr,l,k' is the k-th feature point of I2,r on row yr,l, pr,l,k' = (xr,l,k', yr,l,k', Fr,l,k'), k is the index of the feature point on row yr,l, (xr,l,k', yr,l,k') are the coordinates of pr,l,k', and Fr,l,k' is its feature descriptor.

(3j)令k=1,k表示图像I2,r在yr,l行上的第k个特征点。(3j) Let k=1, k represents the kth feature point of image I 2,r on line y r,l .

(3k)计算Fr,l和Fr,l,k'之间的欧式距离or,l,k。(3k) Calculate the Euclidean distance or,l,k between Fr,l and Fr,l,k'.

(3l)判断k<W是否成立,若是,令k=k+1,并执行步骤(3k),否则,得到特征描述符距离集合Or,l={or,l,k|0<k≤W},并执行步骤(3m)。(3l) Determine whether k<W is true, if so, set k=k+1, and execute step (3k), otherwise, obtain the feature descriptor distance set O r,l ={or r,l,k |0<k ≤W}, and execute step (3m).

(3m)设Or,l中最小值为or,l,k'，则与pr,l相匹配的特征点为pr,l,k'′。(3m) Let or,l,k' be the minimum of Or,l; the feature point matched to pr,l is then pr,l,k'′.

(3n)判断l≤Nr,3是否成立，若是，令l=l+1，并执行步骤(3i)，否则，得到地址-事件数据流集合E2,r中事件和图像I2,r的像素的对应点的集合KPr={fr,l=(pr,l,pr,l,k'′)|0<l≤Nr,3}，并执行步骤(3o)。(3n) Determine whether l ≤ Nr,3 holds; if so, let l = l + 1 and execute step (3i); otherwise, the set of corresponding points between events in the address-event stream set E2,r and pixels of image I2,r is obtained as KPr = {fr,l = (pr,l, pr,l,k'′) | 0 < l ≤ Nr,3}, and step (3o) is executed.
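Steps (3j)-(3m) reduce to a nearest-descriptor search along one epipolar row. A sketch with plain Python lists as descriptors (SIFT itself is not reimplemented here; the names are illustrative):

```python
import math

def match_on_row(descriptor, row_descriptors):
    """Return the index k and distance of the candidate descriptor on an
    epipolar row that minimizes the Euclidean distance to `descriptor`
    (the minimum of the distance set O_rl in the text)."""
    best_k, best_d = -1, float("inf")
    for k, cand in enumerate(row_descriptors):
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(descriptor, cand)))
        if d < best_d:
            best_k, best_d = k, d
    return best_k, best_d
```

Restricting candidates to the same row is valid only after epipolar rectification, which is exactly why step (2) precedes the matching.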

(3o)用八点法通过KPr计算地址-事件数据流集合E2,r中事件坐标和图像Ir中的像素坐标的单应矩阵Hr(3o) Calculate the homography matrix H r of the event coordinates in the address-event data stream set E 2,r and the pixel coordinates in the image I r by the eight-point method through KP r .
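Step (3o) estimates the homography Hr from the matched points. The patent names the eight-point method; the sketch below shows the closely related direct linear transform with h33 fixed to 1, solved exactly from four correspondences with a small Gaussian-elimination solver (a real implementation would use ≥ 8 matches and a least-squares solve, e.g. via SVD). All names are illustrative.

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography_from_points(src, dst):
    """DLT with h33 = 1: each correspondence (x, y) -> (u, v) contributes
    two linear equations in the eight unknown entries of H."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1.0, 0.0, 0.0, 0.0, -u * x, -u * y]); b.append(u)
        A.append([0.0, 0.0, 0.0, x, y, 1.0, -v * x, -v * y]); b.append(v)
    h = solve_linear(A, b)
    return [[h[0], h[1], h[2]], [h[3], h[4], h[5]], [h[6], h[7], 1.0]]

def apply_homography(H, x, y):
    """Map an event coordinate (x, y) to image pixel coordinates via H."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```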

(3p)判断r≤N2是否成立，若是，令r=r+1，并执行步骤(3c)，否则，完成对所有图像和地址-事件数据流的标定，得到表示地址-事件数据流中事件坐标与图像像素坐标对应关系的单应矩阵的集合H={Hr|0<r≤N2}。(3p) Determine whether r ≤ N2 holds; if so, let r = r + 1 and execute step (3c); otherwise, the calibration of all images against the address-event stream is complete, and the set of homographies H = {Hr | 0 < r ≤ N2} representing the correspondence between event coordinates in the address-event stream and image pixel coordinates is obtained.

本发明通过图像的拍摄时刻和地址-事件数据流中事件的时间信息，对地址-事件数据流进行分段划分，保证了只对拍摄时刻相近的图像和地址-事件数据流进行特征匹配，此时两个相机拍摄的是相同的场景，提高了空间标定的准确率。空间标定后，可以获取事件相机和普通CMOS相机的视差，用于恢复空间深度信息。在空间标定之后，还可以将事件相机和普通CMOS相机的数据融合：普通CMOS相机可以补全事件相机无法获得的场景背景信息和颜色信息，事件相机可以获取高时间分辨率的数据，将两者的数据融合，可以获取同时具有高时间分辨率和高空间分辨率的数据。The invention segments the address-event stream using the capture times of the images and the time information of the events in the stream, ensuring that feature matching is performed only between an image and the address-event data captured at nearly the same time, when the two cameras are observing the same scene; this improves the accuracy of the spatial calibration. After spatial calibration, the parallax between the event camera and the ordinary CMOS camera can be obtained and used to recover spatial depth information. The data of the two cameras can also be fused: the ordinary CMOS camera supplies the background and color information of the scene that the event camera cannot capture, while the event camera supplies data with high temporal resolution, so the fused data combine high temporal resolution with high spatial resolution.

本发明解决了现有技术中存在的成像质量差和损失空间深度信息的问题,双目相机空间标定方法包括以下步骤:相机模块获取地址-事件数据流、图像和每幅图像的拍摄时间;校正模块对事件和图像进行畸变校正和极线校正;标定模块对校正后的地址-事件数据流和图像进行空间位置标定,存储模块将校正后的地址-事件数据流、校正后的图像和表示地址-事件数据流中事件坐标与图像像素坐标的对应关系的单应矩阵的集合存储在持久化存储器中。The present invention solves the problems of poor imaging quality and loss of spatial depth information in the prior art. The binocular camera spatial calibration method includes the following steps: the camera module obtains the address-event data stream, images and the shooting time of each image; The module performs distortion correction and epipolar correction on events and images; the calibration module performs spatial position calibration on the corrected address-event data stream and image, and the storage module stores the corrected address-event data stream, corrected image and address - A collection of homography matrices corresponding to event coordinates and image pixel coordinates in the event data stream is stored in a persistent memory.

下面再将双目相机系统与其标定方法结合在一起,对本发明进一步说明。Next, the binocular camera system and its calibration method will be combined together to further illustrate the present invention.

实施例5Example 5

一种双目相机系统及其双目相机空间标定方法同实施例1-4。A binocular camera system and a binocular camera space calibration method thereof are the same as those in Embodiments 1-4.

本发明的一种双目相机系统，参照图1，本发明的双目相机系统依次级联有相机模块、校正模块、标定模块和存储模块，形成双目相机系统。镜头朝向相同且并联的事件相机和普通CMOS相机构成相机模块，其中各模块描述如下：A binocular camera system of the present invention: referring to Fig. 1, the binocular camera system of the invention cascades, in order, a camera module, a correction module, a calibration module, and a storage module. An event camera and an ordinary CMOS camera, mounted in parallel with their lenses facing the same direction, form the camera module. The modules are described as follows:

本发明的相机模块，包括有事件相机和CMOS相机，用于获取地址-事件数据流、图像和图像拍摄时间，其中，事件相机用于获取地址-事件数据流，CMOS相机用于获取图像和图像的拍摄时间，事件相机和CMOS相机同时输出数据并且事件相机和CMOS相机的输出图像的尺寸相同，事件相机和CMOS相机镜头主轴平行，事件相机和CMOS相机的传感器平面在同一平面上。The camera module of the invention comprises an event camera and a CMOS camera and acquires the address-event stream, the images, and the capture time of each image: the event camera acquires the address-event stream, and the CMOS camera acquires the images and their capture times. The two cameras output data simultaneously and their output images have the same size; the principal axes of their lenses are parallel, and their sensor planes lie in the same plane.

本发明的校正模块,用于分别对事件相机输出的地址-事件数据流和CMOS相机输出的图像进行畸变校正和极线校正,并输出校正后的地址-事件数据流和校正后的图像,其中,校正模块存储有事先标定的事件相机的相机矩阵和畸变矩阵,存储有CMOS相机的相机矩阵和畸变矩阵,还存储有事件相机和CMOS相机之间的旋转矩阵和平移矩阵。The correction module of the present invention is used to respectively perform distortion correction and epipolar line correction on the address-event data stream output by the event camera and the image output by the CMOS camera, and output the corrected address-event data stream and the corrected image, wherein , the correction module stores the camera matrix and distortion matrix of the pre-calibrated event camera, stores the camera matrix and distortion matrix of the CMOS camera, and also stores the rotation matrix and translation matrix between the event camera and the CMOS camera.

本发明的标定模块,用于对畸变校正和极线校正后的地址-事件数据流和图像进行空间位置标定,获取地址-事件数据流中事件坐标与图像像素坐标的单应矩阵并输出。The calibration module of the present invention is used to calibrate the spatial position of the address-event data stream and image after distortion correction and epipolar line correction, obtain and output the homography matrix of event coordinates and image pixel coordinates in the address-event data stream.

本发明的存储模块,包括有持久化存储器,用于持久化存储标定结果,标定结果包括校正后的地址-事件数据流、校正后的图像和地址-事件数据流中事件坐标与图像像素坐标的单应矩阵。The storage module of the present invention includes a persistent memory for persistently storing the calibration results, the calibration results including the corrected address-event data stream, the corrected image and the event coordinates and image pixel coordinates in the address-event data stream homography matrix.

在本发明一种双目相机系统的基础上，还设计了双目相机空间标定方法，参照图2，本发明的一种双目相机空间标定方法，可以用于对高速运动目标的数据采集、识别和跟踪，并且获得事件相机和普通CMOS相机的视差，用于恢复空间深度信息。On the basis of the binocular camera system of the invention, a binocular camera spatial calibration method is also designed. Referring to Fig. 2, the binocular camera spatial calibration method of the invention can be used for data acquisition, recognition, and tracking of high-speed moving targets, and obtains the parallax between the event camera and the ordinary CMOS camera for recovering spatial depth information.

包括如下步骤:Including the following steps:

(1)相机模块获取地址-事件数据流、图像和每幅图像的拍摄时间:(1) The camera module obtains the address-event data stream, images and the shooting time of each image:

相机模块中的事件相机获取地址-事件数据流E={ei|0<i≤N1}并输出，同时相机模块中的CMOS相机获取N2幅图像和每幅图像拍摄的时刻S={sr|0<r≤N2}并输出，其中ei表示第i个事件，ei=(xi,yi,gi,ti)，xi和yi分别表示ei的触发位置像素的横坐标和纵坐标，gi表示ei的灰度值，gi>0，ti表示ei的触发的时间，sr=(Ir,tpr)，Ir表示图像序列S中的第r幅图像，tpr表示图像Ir拍摄的时刻，N1>0，N1表示地址-事件数据流E中的事件数量，N2>0。The event camera in the camera module acquires and outputs the address-event stream E = {ei | 0 < i ≤ N1}, while the CMOS camera in the camera module acquires and outputs N2 images and the capture time of each image, S = {sr | 0 < r ≤ N2}, where ei denotes the i-th event, ei = (xi, yi, gi, ti); xi and yi are the abscissa and ordinate of the pixel at which ei was triggered; gi is the gray value of ei, gi > 0; ti is the trigger time of ei; sr = (Ir, tpr), Ir is the r-th image of the sequence S, and tpr is the time at which Ir was captured; N1 > 0 is the number of events in the address-event stream E, and N2 > 0.
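The two output formats above can be made explicit with lightweight record types; a minimal sketch (the patent fixes only the tuple contents, so the field names are illustrative):

```python
from typing import List, NamedTuple

class Event(NamedTuple):
    """One address-event e_i = (x_i, y_i, g_i, t_i)."""
    x: int      # abscissa of the triggering pixel
    y: int      # ordinate of the triggering pixel
    g: float    # gray value, g > 0
    t: float    # trigger time t_i

class Frame(NamedTuple):
    """One CMOS sample s_r = (I_r, tp_r)."""
    image: List[List[float]]  # the r-th image I_r
    tp: float                 # capture time tp_r
```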

(2)校正模块对每个事件ei和图像Ir进行畸变校正和极线校正:(2) The correction module performs distortion correction and epipolar line correction on each event e i and image I r :

校正模块通过事件相机的相机矩阵和畸变矩阵对每个事件ei进行畸变校正，得到畸变校正后的事件e1,i，通过CMOS相机的相机矩阵和畸变矩阵对每幅图像Ir进行畸变校正，得到畸变校正后的图像I1,r，并通过事件相机和CMOS相机之间的旋转矩阵和平移矩阵对e1,i和I1,r进行极线校正，得到极线校正后的事件e2,i和图像I2,r。The correction module undistorts each event ei using the event camera's camera matrix and distortion matrix, yielding the distortion-corrected event e1,i, and undistorts each image Ir using the CMOS camera's camera matrix and distortion matrix, yielding the distortion-corrected image I1,r; it then applies epipolar rectification to e1,i and I1,r using the rotation and translation matrices between the two cameras, yielding the rectified event e2,i and image I2,r.

(3)标定模块对校正后的地址-事件数据流和图像进行空间位置标定:(3) The calibration module performs spatial position calibration on the corrected address-event data stream and image:

(3a)构建全零矩阵M=zeros(H,W)，其中H和W分别表示CMOS相机输出图像的总行数和总列数，H≥32，W≥32，令M中的每一个元素m=0，并令i=1。(3a) Construct an all-zero matrix M = zeros(H, W), where H and W are the total number of rows and columns of the CMOS camera's output image, H ≥ 32, W ≥ 32; set every element m of M to 0, and let i = 1.

(3b)令r=1,r表示校正后的普通CMOS相机图像的序号。(3b) Let r=1, r represents the serial number of the normal CMOS camera image after correction.

(3c)根据图像I2,r的拍摄时间tpr将地址-事件数据流划分为地址-事件数据流段:(3c) divide the address-event data flow into address-event data flow segments according to the shooting time tp r of the image I 2,r :

(3c1)设图像Ir对应的地址-事件数据流事件集合为E2,r,令E2,r为空集合。(3c1) Let the address-event data stream event set corresponding to the image I r be E 2,r , and let E 2,r be an empty set.

(3c2)判断事件e2,i=(x2,i,y2,i,gi,ti)中0<x2,i≤W且0<y2,i≤H是否成立,若是,执行步骤(3c3),否则,令i=i+1,并执行步骤(3c2)。(3c2) Judging whether 0<x 2, i ≤W and 0<y 2,i ≤H in the event e 2,i =(x 2,i ,y 2,i ,g i ,t i ) holds true, if so, Execute step (3c3), otherwise, set i=i+1, and execute step (3c2).

(3c3)将事件e2,i=(x2,i,y2,i,gi,ti)加入到集合E2,r中,并执行步骤(3c4)。(3c3) Add the event e 2,i =(x 2,i ,y 2,i ,gi , t i ) to the set E 2,r , and execute step (3c4).

(3c4)判断ti≤tpr是否成立，若是，令i=i+1，并执行步骤(3c2)，否则，得到图像Ir对应的地址-事件数据流段E2,r={e2,r,b|0<b≤B}，其中，e2,r,b=(x2,r,b,y2,r,b,g2,r,b,t2,r,b)，B表示集合E2,r中的事件总数量。(3c4) Determine whether ti ≤ tpr holds; if so, let i = i + 1 and execute step (3c2); otherwise, the address-event stream segment corresponding to image Ir is obtained as E2,r = {e2,r,b | 0 < b ≤ B}, where e2,r,b = (x2,r,b, y2,r,b, g2,r,b, t2,r,b) and B is the total number of events in the set E2,r.

(3d)开始将地址-事件数据流段中的事件累积到矩阵M中,令b=1,b是一个地址-事件数据流段E2,r中的事件序号。(3d) Begin to accumulate the events in the address-event data stream segment into the matrix M, let b=1, b is the event sequence number in an address-event data stream segment E 2,r .

(3e)将地址-事件数据流段中事件e2,r,b的灰度值g2,r,b累积到矩阵M中坐标为(x2,r,b,y2,r,b)的元素上，此时矩阵M已经不再是全零矩阵。(3e) Accumulate the gray value g2,r,b of event e2,r,b in the address-event stream segment into the element of matrix M at coordinates (x2,r,b, y2,r,b); the matrix M is then no longer an all-zero matrix.

(3f)判断b≤B是否成立,若是,令b=b+1,并执行步骤(3e),否则,当b>B时,完成对地址-事件数据流段的累积,执行步骤(3g)。(3f) Judging whether b≤B is established, if so, make b=b+1, and execute step (3e), otherwise, when b>B, complete the accumulation of address-event data flow segments, and execute step (3g) .

(3g)开始对图像I2,r和矩阵M进行特征点匹配：用SIFT算子提取矩阵M中的特征点，得到Nr,3个特征点HPr={pr,l|0<l≤Nr,3}，其中，pr,l=(xr,l,yr,l,Fr,l)，l为特征点pr,l的序号，(xr,l,yr,l)表示特征点的坐标，Fr,l表示特征点的特征描述符，Nr,3>8。(3g) Begin feature point matching between image I2,r and matrix M: extract feature points from M with the SIFT operator, obtaining Nr,3 feature points HPr = {pr,l | 0 < l ≤ Nr,3}, where pr,l = (xr,l, yr,l, Fr,l), l is the index of feature point pr,l, (xr,l, yr,l) are its coordinates, Fr,l is its feature descriptor, and Nr,3 > 8.

(3h)令l=1,l为矩阵M的特征点集合中特征点的序号。(3h) Let l=1, l is the serial number of the feature point in the feature point set of matrix M.

(3i)开始在图像I2,r中寻找与特征点pr,l匹配的特征点：用SIFT算子计算图像I2,r在第yr,l行上各像素的特征点，得到W个特征点HPr,l'={pr,l,k'|0<k≤W}，其中，pr,l,k'表示图像I2,r在第yr,l行上的第k个特征点，pr,l,k'=(xr,l,k',yr,l,k',Fr,l,k')，k为图像I2,r中第yr,l行上特征点的序号，(xr,l,k',yr,l,k')表示特征点pr,l,k'的坐标，Fr,l,k'表示特征点pr,l,k'的特征描述符。(3i) Begin searching image I2,r for the feature point matching pr,l: compute, with the SIFT operator, the feature points of the pixels of I2,r on row yr,l, obtaining W feature points HPr,l' = {pr,l,k' | 0 < k ≤ W}, where pr,l,k' is the k-th feature point of I2,r on row yr,l, pr,l,k' = (xr,l,k', yr,l,k', Fr,l,k'), k is the index of the feature point on row yr,l, (xr,l,k', yr,l,k') are the coordinates of pr,l,k', and Fr,l,k' is its feature descriptor.

(3j)令k=1,k表示图像I2,r在yr,l行上的第k个特征点。(3j) Let k=1, k represents the kth feature point of image I 2,r on line y r,l .

(3k)计算Fr,l和Fr,l,k'之间的欧式距离or,l,k。(3k) Calculate the Euclidean distance or,l,k between Fr,l and Fr,l,k'.

(3l)判断k<W是否成立,若是,令k=k+1,并执行步骤(3k),否则,得到特征描述符距离集合Or,l={or,l,k|0<k≤W},并执行步骤(3m)。(3l) Determine whether k<W is true, if so, set k=k+1, and execute step (3k), otherwise, obtain the feature descriptor distance set O r,l ={or r,l,k |0<k ≤W}, and execute step (3m).

(3m)设Or,l中最小值为or,l,k'，则与pr,l相匹配的特征点为pr,l,k'′。(3m) Let or,l,k' be the minimum of Or,l; the feature point matched to pr,l is then pr,l,k'′.

(3n)判断l≤Nr,3是否成立，若是，令l=l+1，并执行步骤(3i)，否则，得到地址-事件数据流集合E2,r中事件和图像I2,r的像素的对应点的集合KPr={fr,l=(pr,l,pr,l,k'′)|0<l≤Nr,3}，并执行步骤(3o)。(3n) Determine whether l ≤ Nr,3 holds; if so, let l = l + 1 and execute step (3i); otherwise, the set of corresponding points between events in the address-event stream set E2,r and pixels of image I2,r is obtained as KPr = {fr,l = (pr,l, pr,l,k'′) | 0 < l ≤ Nr,3}, and step (3o) is executed.

(3o)用八点法通过KPr计算地址-事件数据流集合E2,r中事件坐标和图像Ir中的像素坐标的单应矩阵Hr(3o) Calculate the homography matrix H r of the event coordinates in the address-event data stream set E 2,r and the pixel coordinates in the image I r by the eight-point method through KP r .

(3p)判断r≤N2是否成立，若是，令r=r+1，并执行步骤(3c)，否则，完成对所有图像和地址-事件数据流的标定，得到表示地址-事件数据流中事件坐标与图像像素坐标对应关系的单应矩阵的集合H={Hr|0<r≤N2}。(3p) Determine whether r ≤ N2 holds; if so, let r = r + 1 and execute step (3c); otherwise, the calibration of all images against the address-event stream is complete, and the set of homographies H = {Hr | 0 < r ≤ N2} representing the correspondence between event coordinates in the address-event stream and image pixel coordinates is obtained.

综上所述,本发明提出的一种双目相机系统和双目相机空间标定方法,解决了现有技术中存在的成像质量差和损失空间深度信息的问题。双目相机系统依次级联有相机模块、校正模块、标定模块和存储模块,其中相机模块包括并联的事件相机和普通CMOS相机。双目相机空间标定方法包括以下步骤:相机模块获取地址-事件数据流、图像和每幅图像的拍摄时间;校正模块对事件和图像进行畸变校正和极线校正;标定模块对校正后的地址-事件数据流和图像进行空间位置标定;存储模块将校正后的地址-事件数据流、校正后的图像和地址-事件数据流中事件坐标与图像像素坐标的单应矩阵集合存储在持久化存储器中。本发明双目相机系统不使用分光镜,避免了因使用分光镜而导致的相机成像质量下降的问题。本发明中两个相机之间具有视差,可以通过两个相机之间的视差恢复空间深度信息。本发明可以重建高时间分辨率、高空间分辨率、高动态范围的图像数据,用于高速运动目标的识别和跟踪。同时,本发明还可以用于恢复空间深度信息,可以用于无人驾驶系统的视觉导航,重建出环境信息,增强无人驾驶系统的安全性。In summary, a binocular camera system and a binocular camera spatial calibration method proposed by the present invention solve the problems of poor imaging quality and loss of spatial depth information existing in the prior art. The binocular camera system is sequentially cascaded with a camera module, a correction module, a calibration module and a storage module, wherein the camera module includes a parallel event camera and a common CMOS camera. The binocular camera space calibration method includes the following steps: the camera module obtains the address-event data stream, image and shooting time of each image; the correction module performs distortion correction and epipolar line correction on the event and image; the calibration module corrects the corrected address- Spatial position calibration of the event data stream and image; the storage module stores the corrected address-event data stream, the corrected image, and the homography matrix set of event coordinates and image pixel coordinates in the address-event data stream in a persistent memory . The binocular camera system of the present invention does not use a beam splitter, which avoids the problem of camera imaging quality degradation caused by the use of a beam splitter. In the present invention, there is a parallax between the two cameras, and spatial depth information can be restored through the parallax between the two cameras. 
The invention can reconstruct image data with high temporal resolution, high spatial resolution and high dynamic range, and is used for identification and tracking of high-speed moving targets. At the same time, the present invention can also be used to restore spatial depth information, can be used for visual navigation of unmanned driving systems, reconstructs environmental information, and enhances the safety of unmanned driving systems.

Claims (4)

1.一种双目相机系统,其特征在于,依次级联有相机模块、校正模块、标定模块和存储模块,形成双目相机系统;镜头朝向相同且并联的事件相机和普通CMOS相机构成相机模块,其中各模块描述如下:1. A binocular camera system, characterized in that a camera module, a calibration module, a calibration module and a storage module are cascaded in sequence to form a binocular camera system; the camera lens is directed towards the same parallel event camera and a common CMOS camera to form a camera module , where each module is described as follows: 所述相机模块,包括有事件相机和CMOS相机,用于获取地址-事件数据流、图像和图像拍摄时间,其中,事件相机用于获取地址-事件数据流,CMOS相机用于获取图像和图像的拍摄时间,事件相机和CMOS相机同时输出数据并且事件相机和CMOS相机的输出图像的尺寸相同,事件相机的镜头和CMOS相机的镜头主轴平行,事件相机和CMOS相机的传感器平面在同一平面上;Described camera module comprises event camera and CMOS camera, is used for obtaining address-event data flow, image and image shooting time, wherein, event camera is used for obtaining address-event data flow, and CMOS camera is used for obtaining image and image Shooting time, the event camera and the CMOS camera output data at the same time and the output images of the event camera and the CMOS camera have the same size, the lens of the event camera is parallel to the lens axis of the CMOS camera, and the sensor planes of the event camera and the CMOS camera are on the same plane; 所述校正模块,用于分别对事件相机输出的地址-事件数据流和CMOS相机输出的图像进行畸变校正和极线校正,并输出校正后的地址-事件数据流和校正后的图像,其中,校正模块存储有事先标定的事件相机的相机矩阵和畸变矩阵,存储有事先标定的CMOS相机的相机矩阵和畸变矩阵,还存储有事先标定的事件相机和CMOS相机之间的旋转矩阵和平移矩阵;The correction module is used to respectively perform distortion correction and epipolar correction on the address-event data stream output by the event camera and the image output by the CMOS camera, and output the corrected address-event data stream and the corrected image, wherein, The correction module stores the camera matrix and distortion matrix of the pre-calibrated event camera, stores the camera matrix and distortion matrix of the pre-calibrated CMOS camera, and also stores the rotation matrix and translation matrix between the pre-calibrated event camera and the CMOS camera; 
所述标定模块,用于对畸变校正和极线校正后的地址-事件数据流和图像进行空间位置标定,获取表示地址-事件数据流中事件坐标与图像像素坐标对应关系的单应矩阵并输出;The calibration module is used to calibrate the spatial position of the address-event data stream and image after distortion correction and epipolar line correction, obtain and output a homography matrix representing the corresponding relationship between event coordinates and image pixel coordinates in the address-event data stream ; 所述存储模块,包括有持久化存储器,用于持久化存储标定结果,标定结果包括校正后的地址-事件数据流、校正后的图像和表示地址-事件数据流中事件坐标和图像像素坐标对应关系的单应矩阵。The storage module includes a persistent memory for persistently storing the calibration results, the calibration results include the corrected address-event data stream, the corrected image and the correspondence between event coordinates and image pixel coordinates in the address-event data stream The homography matrix of the relation. 2.根据权利要求1所述的双目相机系统,其特征在于,所述校正模块中存储的事先标定的事件相机的相机矩阵和畸变矩阵由张正友标定法获得,校正模块存储的事先标定的CMOS相机的相机矩阵和畸变矩阵由张正友标定法获得;校正模块存储的事先标定的事件相机和CMOS相机之间的旋转矩阵和平移矩阵,由Bouguet极线校正方法获得。2. The binocular camera system according to claim 1, wherein the camera matrix and distortion matrix of the event camera calibrated in advance stored in the correction module are obtained by Zhang Zhengyou calibration method, and the CMOS calibrated in advance stored in the correction module The camera matrix and distortion matrix of the camera are obtained by Zhang Zhengyou’s calibration method; the rotation matrix and translation matrix between the pre-calibrated event camera and the CMOS camera stored in the correction module are obtained by Bouguet’s epipolar correction method. 3.一种双目相机空间标定方法,是在权利要求1-2所述的双目相机系统上实现的,其特征在于,包括有如下步骤:3. 
A binocular camera space calibration method is implemented on the binocular camera system according to claim 1-2, characterized in that it comprises the following steps: (1)相机模块获取地址-事件数据流、图像和每幅图像的拍摄时间:(1) The camera module obtains the address-event data stream, images and the shooting time of each image: 相机模块中的事件相机获取地址-事件数据流E={ei|0<i≤N1}并输出,同时相机模块中的CMOS相机获取N2幅图像和每幅图像拍摄的时刻S={sr|0<r≤N2}并输出,其中ei表示第i个事件,ei=(xi,yi,gi,ti),xi和yi分别表示ei的触发位置像素的横坐标和纵坐标,gi表示ei的灰度值,gi>0,ti表示ei的触发的时间,sr=(Ir,tpr),Ir表示图像序列S中的第r幅图像,tpr表示图像Ir拍摄的时刻,N1>0,N1表示地址-事件数据流E中的事件数量,N2>0;The event camera in the camera module acquires the address-event data stream E={e i |0<i≤N 1 } and outputs it, while the CMOS camera in the camera module acquires N 2 images and the moment S={ s r |0<r≤N 2 } and output, where e i represents the i-th event, e i = (xi , y i , g i , t i ), and xi and y i respectively represent the triggering of e i The abscissa and ordinate of the position pixel, g i represents the gray value of e i , g i >0, t i represents the triggering time of e i , s r = (I r , tp r ), I r represents the image sequence For the rth image in S, tp r represents the moment when image I r is taken, N 1 >0, N 1 represents the number of events in the address-event data stream E, N 2 >0; (2)校正模块对每个事件ei和图像Ir进行畸变校正和极线校正:(2) The correction module performs distortion correction and epipolar line correction on each event e i and image I r : 校正模块通过事件相机的相机矩阵和畸变矩阵对每个事件ei进行畸变校正,得到畸变校正后的事件e1,i,通过CMOS相机的相机矩阵和畸变矩阵对每幅图像Ir进行畸变校正,得到畸变校正后的图像I1,r,并通过事件相机和CMOS相机之间的旋转矩阵和平移矩阵对事件e1,i和图像I1,r进行极线校正,得到极线校正后的事件e2,i和图像I2,rThe correction module performs distortion correction on each event e i through the camera matrix and distortion matrix of the event camera to obtain the distortion-corrected event e 1,i , and performs distortion correction on each image I r through the camera matrix and distortion matrix of the CMOS camera , get the distortion-corrected image I 1,r , and perform epipolar correction on the event e 1,i and image I 1,r through 
the rotation matrix and translation matrix between the event camera and the CMOS camera, and obtain the epipolar-corrected image event e 2,i and image I 2,r ; (3)标定模块对校正后的地址-事件数据流和图像进行空间位置标定:(3) The calibration module performs spatial position calibration on the corrected address-event data stream and image: 标定模块对每一幅图像I2,r,根据图像I2,r的拍摄时刻tpr,将校正后的地址-事件数据流E2切分为多个地址-事件数据流段E2,r,并将地址-事件数据流段E2,r中的事件根据事件的坐标累积到矩阵中,用SIFT算子分别计算事件累积的矩阵和图像I2,r的特征点,并进行特征点匹配,得到多对匹配点,并通过匹配点计算矩阵和图像I2,r之间的单应矩阵,单应矩阵表示地址-事件数据流的事件坐标和图像像素坐标的对应关系;For each image I 2,r , the calibration module divides the corrected address-event data stream E 2 into multiple address-event data stream segments E 2, r according to the shooting time tp r of the image I 2 ,r , and accumulate the events in the address-event data flow segment E 2,r into the matrix according to the coordinates of the event, use the SIFT operator to calculate the feature points of the event accumulation matrix and the image I 2,r respectively, and perform feature point matching , obtain many pairs of matching points, and calculate the homography matrix between the matrix and the image I 2,r by the matching points, the homography matrix represents the corresponding relationship between the event coordinates of the address-event data stream and the image pixel coordinates; (4)存储模块存储标定结果:存储模块将校正后的地址-事件数据流、校正后的图像和表示地址-事件数据流中事件坐标与图像像素坐标的对应关系的单应矩阵的集合存储在持久化存储器中,完成对事件相机和CMOS相机的空间标定。(4) The storage module stores the calibration result: the storage module stores the corrected address-event data stream, the corrected image and the homography matrix representing the correspondence between the event coordinates and the image pixel coordinates in the address-event data stream. In the persistent memory, the spatial calibration of the event camera and the CMOS camera is completed. 4.根据权利要求3所述的双目相机系统的双目相机空间标定方法,其特征在于,步骤(3)中所述标定模块对校正后的地址-事件数据流和图像进行空间位置标定,包括有以下步骤:4. 
The binocular camera space calibration method for the binocular camera system according to claim 3, characterized in that the spatial position calibration that the calibration module performs on the corrected address-event data stream and images in step (3) comprises the following steps:

(3a) Construct the all-zero matrix M = zeros(H, W), where H and W denote the number of rows and columns of the CMOS camera output image respectively, H ≥ 32, W ≥ 32; set every element m of M to 0, and let i = 1.

(3b) Let r = 1.

(3c) Split the address-event data stream into address-event data stream segments according to the shooting time tp_r of the image I_{2,r}:

(3c1) Let E_{2,r} be the address-event data stream event set corresponding to the image I_r, initialised as the empty set.

(3c2) Judge whether 0 < x_{2,i} ≤ W and 0 < y_{2,i} ≤ H hold for the event e_{2,i} = (x_{2,i}, y_{2,i}, g_i, t_i); if so, execute step (3c3); otherwise, let i = i + 1 and execute step (3c2).

(3c3) Add the event e_{2,i} = (x_{2,i}, y_{2,i}, g_i, t_i) to the set E_{2,r} and execute step (3c4).

(3c4) Judge whether t_i ≤ tp_r holds; if so, let i = i + 1 and execute step (3c2); otherwise, obtain the address-event data stream segment E_{2,r} = {e_{2,r,b} | 0 < b ≤ B} corresponding to the image I_r, where e_{2,r,b} = (x_{2,r,b}, y_{2,r,b}, g_{2,r,b}, t_{2,r,b}) and B denotes the number of events in the set E_{2,r}.

(3d) Let b = 1.

(3e) Let M_{(x_{2,r,b}, y_{2,r,b})} = M_{(x_{2,r,b}, y_{2,r,b})} + g_{2,r,b}, where M_{(x_{2,r,b}, y_{2,r,b})} denotes the element of the matrix M whose coordinates are (x_{2,r,b}, y_{2,r,b}).

(3f) Judge whether b ≤ B holds; if so, let b = b + 1 and execute step (3e); otherwise, execute step (3g).

(3g) Extract the feature points of the matrix M with the SIFT operator, obtaining N_{r,3} feature points HP_r = {p_{r,l} | 0 < l ≤ N_{r,3}}, where p_{r,l} = (x_{r,l}, y_{r,l}, F_{r,l}), (x_{r,l}, y_{r,l}) denotes the coordinates of the feature point, F_{r,l} denotes the feature descriptor of the feature point, and N_{r,3} > 8.

(3h) Let l = 1.

(3i) Compute with the SIFT operator the feature points of the pixels of the image I_{2,r} on row y_{r,l}, obtaining W feature points HP'_{r,l} = {p'_{r,l,k} | 0 < k ≤ W}, where p'_{r,l,k} denotes the k-th feature point of the image I_{2,r} on row y_{r,l}, p'_{r,l,k} = (x'_{r,l,k}, y'_{r,l,k}, F'_{r,l,k}), (x'_{r,l,k}, y'_{r,l,k}) denotes the coordinates of the feature point p'_{r,l,k}, and F'_{r,l,k} denotes the feature descriptor of the feature point p'_{r,l,k}.

(3j) Let k = 1.

(3k) Compute the Euclidean distance o_{r,l,k} between F_{r,l} and F'_{r,l,k}.

(3l) Judge whether k < W holds; if so, let k = k + 1 and execute step (3k); otherwise, obtain the feature descriptor distance set O_{r,l} = {o_{r,l,k} | 0 < k ≤ W} and execute step (3m).

(3m) Let o_{r,l,k'} be the minimum value in O_{r,l}; then the feature point corresponding to p_{r,l} is p'_{r,l,k'}.

(3n) Judge whether l ≤ N_{r,3} holds; if so, let l = l + 1 and execute step (3i); otherwise, obtain the set KP_r = {f_{r,l} = (p_{r,l}, p'_{r,l,k'}) | 0 < l ≤ N_{r,3}} of corresponding points between events of the address-event data stream set E_{2,r} and pixels of the image I_r, and execute step (3o).

(3o) Compute from KP_r, by the eight-point method, the homography matrix H_r representing the correspondence between the event coordinates in the address-event data stream set E_{2,r} and the pixel coordinates in the image I_r.

(3p) Judge whether r ≤ N_2 holds; if so, let r = r + 1 and execute step (3c); otherwise, obtain the set H = {H_r | 0 < r < N_2} of homography matrices representing the correspondence between event coordinates in the address-event data stream and image pixel coordinates.
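Step (2) corrects lens distortion per event address rather than on a dense image. The sketch below is a minimal illustration, not the patented correction module: it assumes a two-coefficient radial model (k1, k2) and a pinhole camera matrix K, and inverts the distortion by fixed-point iteration, the same idea OpenCV's cv2.undistortPoints uses internally. All names are illustrative.

```python
import numpy as np

def undistort_event_addresses(xy, K, dist, n_iter=20):
    """Map distorted event pixel addresses to undistorted ones.

    xy   : (N, 2) array of distorted pixel addresses (x, y)
    K    : 3x3 camera matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
    dist : radial distortion coefficients (k1, k2)
    """
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    k1, k2 = dist
    xy = np.asarray(xy, dtype=float)
    # distorted addresses, in normalised camera coordinates
    xd = (xy[:, 0] - cx) / fx
    yd = (xy[:, 1] - cy) / fy
    # fixed-point iteration: undistorted = distorted / radial_factor(undistorted)
    xu, yu = xd.copy(), yd.copy()
    for _ in range(n_iter):
        r2 = xu * xu + yu * yu
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / factor, yd / factor
    # back to pixel coordinates
    return np.column_stack([xu * fx + cx, yu * fy + cy])
```

The rectified event e_{2,i} would additionally require rotating these undistorted rays by the rectifying rotation from the stereo extrinsics; that step is omitted here.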
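Steps (3a)-(3f) boil down to a single pass over the time-ordered event stream: cut it at each frame time tp_r and sum each segment into an H×W matrix at the event addresses. A minimal NumPy sketch of that idea, assuming 0-based addresses (the claim uses 1-based) and accumulation of the grey value g:

```python
import numpy as np

def accumulate_event_frames(events, frame_times, H, W):
    """Cut an address-event stream at each frame time and accumulate
    every segment into an H x W matrix (cf. steps (3a)-(3f)).

    events      : iterable of (x, y, g, t) event tuples
    frame_times : shooting time tp_r of each CMOS frame, ascending
    Returns one H x W accumulation matrix per frame.
    """
    events = sorted(events, key=lambda e: e[3])   # order by timestamp t
    mats, i = [], 0
    for tp in frame_times:
        M = np.zeros((H, W))
        # consume every event up to and including this frame's time
        while i < len(events) and events[i][3] <= tp:
            x, y, g, _ = events[i]
            if 0 <= x < W and 0 <= y < H:         # keep in-frame addresses only
                M[y, x] += g                      # accumulate grey value at (x, y)
            i += 1
        mats.append(M)
    return mats
```

Because the index i is never reset, each event is visited once across all frames, matching the claim's single running index over the stream.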
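Step (3o) fits the homography H_r from the matched pairs KP_r by the eight-point method. As a hedged stand-in, the normalised direct linear transform below (pure NumPy, any N ≥ 4 correspondences) solves the same least-squares problem; function and variable names are illustrative, not from the patent.

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares homography H with dst ~ H @ [src, 1] from N >= 4
    matched point pairs, via the normalised direct linear transform.

    src, dst : (N, 2) arrays of matched event / image pixel coordinates
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)

    def normalise(pts):
        # Hartley normalisation: centroid to origin, mean radius sqrt(2)
        c = pts.mean(axis=0)
        s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0.0, -s * c[0]],
                      [0.0, s, -s * c[1]],
                      [0.0, 0.0, 1.0]])
        return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T

    sp, Ts = normalise(src)
    dp, Td = normalise(dst)
    rows = []
    for (x, y, _), (u, v, _) in zip(sp, dp):
        # each pair contributes two linear constraints on the 9 entries of H
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null vector of the stacked constraint matrix
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    Hn = Vt[-1].reshape(3, 3)
    H = np.linalg.inv(Td) @ Hn @ Ts                # undo the normalisation
    return H / H[2, 2]
```

With exact correspondences this recovers H up to scale; with noisy SIFT matches one would normally wrap it in RANSAC, as cv2.findHomography does.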
CN202011593703.1A 2020-12-29 2020-12-29 Binocular camera system and binocular camera space calibration method Active CN112700502B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011593703.1A CN112700502B (en) 2020-12-29 2020-12-29 Binocular camera system and binocular camera space calibration method


Publications (2)

Publication Number Publication Date
CN112700502A CN112700502A (en) 2021-04-23
CN112700502B true CN112700502B (en) 2023-08-01

Family

ID=75511809


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822945A (en) * 2021-09-28 2021-12-21 天津朗硕机器人科技有限公司 Workpiece identification and positioning method based on binocular vision
CN114401391B (en) * 2021-12-09 2023-01-06 北京邮电大学 Method and device for generating virtual viewpoint
CN114092569B (en) * 2022-01-19 2022-08-05 安维尔信息科技(天津)有限公司 Binocular camera online calibration method and system based on multi-sensor fusion
CN117911540B (en) * 2024-03-18 2024-09-17 安徽大学 Event camera calibration device and method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103776419A (en) * 2014-01-24 2014-05-07 华南理工大学 Binocular-vision distance measurement method capable of widening measurement range
KR101589167B1 (en) * 2015-02-09 2016-01-27 동의대학교 산학협력단 System and Method for Correcting Perspective Distortion Image Using Depth Information
CN107255443A (en) * 2017-07-14 2017-10-17 北京航空航天大学 Binocular vision sensor field calibration method and device under a kind of complex environment
CN107729893A (en) * 2017-10-12 2018-02-23 清华大学 A kind of vision positioning method of clapper die spotting press, system and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
VLSI hardware circuit architecture design for binocular image rectification; Wei Qinzhi; Chen Song; Information Technology and Network Security (Issue 06); full text *
Obstacle measurement method for reversing environments based on binocular stereo vision; Liu Yugang; Wang Zhuojun; Wang Fujing; Zhang Zutao; Xu Hong; Journal of Transportation Systems Engineering and Information Technology (Issue 04); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant