CN114640801B - A vehicle-side panoramic view assisted driving system based on image fusion - Google Patents
A vehicle-side panoramic view assisted driving system based on image fusion
- Publication number
- CN114640801B CN202210124847.5A CN202210124847A
- Authority
- CN
- China
- Prior art keywords
- image
- fisheye
- panoramic
- images
- image processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Mathematical Physics (AREA)
- Mathematical Optimization (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Analysis (AREA)
- Computational Mathematics (AREA)
- Signal Processing (AREA)
- Pure & Applied Mathematics (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Algebra (AREA)
- Closed-Circuit Television Systems (AREA)
- Image Processing (AREA)
Abstract
A vehicle-side panoramic view assisted driving system based on image fusion comprises an image acquisition module for collecting 360° road information around the vehicle body, an embedded image processing device for processing the output of the image acquisition module in real time, and an image display device for showing the vehicle-side panoramic image. The embedded image processing device is physically connected to the other two devices by cables. Images are captured by three fisheye cameras with 180° fields of view mounted at different positions on the vehicle, and a video encoder and a video capture card combine the multiple analog video streams into a single stream. The embedded image processing device passes the combined digital video through a fisheye image processing module and a panoramic image stitcher, which stitch the three digital video images taken from different angles into one panoramic image, and a Web panorama player then presents the stitched panorama on the image display device. The invention can eliminate the blind spots in the field of vision of large special-purpose vehicles while driving.
Description
Technical Field
The invention relates to the field of safe driving of large special-purpose vehicles, and in particular to a vehicle-side panoramic view assisted driving system based on image fusion.
Background Art
In recent years, with the continuing urbanization of China, more and more large special-purpose vehicles have appeared on urban roads, including but not limited to city buses, cement mixer trucks and muck trucks. These vehicles have greatly facilitated daily production and life, but they are typically long and tall, which gives them wide blind spots. When driving on the road, and especially when turning, serious traffic accidents can occur because of these large blind spots, posing a considerable safety hazard to other vehicles and pedestrians. Major cities across the country have already introduced a "stop before turning right" policy for large special-purpose vehicles, yet traffic accidents caused by their blind spots keep occurring. Therefore, to improve the driving safety of large special-purpose vehicles and to reduce the risk to other road vehicles and pedestrians as much as possible, the blind spots that exist while these vehicles are driving on the road must be reduced or even completely eliminated.
To this end, various systems for reducing vehicle blind spots have been researched and developed. Wang Lujie et al. proposed a vehicle-mounted system with probes mounted in front of the car's rear-view mirrors (Wang Lujie; Ma Feilong; Shi Diao; et al. Vehicle-mounted system with front-mounted probes on car rear-view mirrors [P]. Chinese patent CN112776729A, 2021-05-11), which addresses blind spots by installing foreign-object detection components on the vehicle body, but suffers from a complex design and high cost. Ye Shengjuan et al. proposed a vehicle imaging device for eliminating blind spots (Ye Shengjuan; Wang Haifeng; Yang Yingfei; et al. A vehicle imaging device for eliminating blind spots in the driver's field of vision [P]. Chinese patent CN213948279U, 2021-08-13), which installs a fixed display on the vehicle and solves the blind-spot problem by adjusting its mounting angle; however, the viewing angle is fixed, the display angle must be adjusted manually, and the device cannot show the complete environment around the vehicle body.
Summary of the Invention
To overcome the above problems of the prior art, the present invention provides a vehicle-side panoramic view assisted driving system based on image fusion, which aims to reduce or even completely eliminate the blind spots of large special-purpose vehicles and thereby improve their driving safety.
The vehicle-side panoramic view assisted driving system based on image fusion of the present invention comprises an image acquisition device, an embedded image processing device and an image display device.
The image acquisition device uses three fisheye cameras to capture the 360° road environment around the vehicle body. A video encoder combines the output images of the three fisheye cameras into a single analog video stream, a video capture card converts this analog video into digital video, and the digital video is transmitted over a cable to the fisheye image processing module of the embedded image processing device for image processing;
The embedded image processing device comprises a fisheye image processing module, a Web panorama player, and a panoramic image stitcher;
The fisheye image processing module processes the original fisheye images output by the video capture card. Because a fisheye camera captures a very wide field of view, the pixel information near the edge of the image is severely distorted, so the original fisheye image must be corrected into an unwrapped view by latitude-longitude expansion to improve the quality of the final panoramic stitching. The method applies a series of transformations to the pixel coordinates of the fisheye image: the pixel coordinates in the 2D Cartesian coordinate system are transformed into a spherical Cartesian coordinate system, the coordinates in the spherical Cartesian system are then converted into latitude-longitude coordinates, and finally the pixels are remapped according to these latitude-longitude coordinates, thereby converting the fisheye image into an unwrapped view. The specific steps are as follows:
1) After the original fisheye image is obtained, a circular mask function is written from the center and radius of the fisheye imaging circle of each of the three views to extract the target image region, where the pixel coordinates of the extracted image region lie in the range given by formula (1):

x ∈ [0, cols-1], y ∈ [0, rows-1]   (1)

where x and y are the horizontal and vertical pixel coordinates of the extracted image, cols is the horizontal width of the original fisheye image, and rows is the vertical height of the original fisheye image;
2) To control the resolution of the final video after image fusion, the size of the images output in step 1) must be controlled;
3) The pixel coordinates (x, y) of the extracted image region are converted from the 2D Cartesian coordinate system into the standard coordinates A(xA, yA), with the conversion relationship given by formula (2):

where x and y are the horizontal and vertical pixel coordinates of the extracted image, cols is the horizontal width of the original fisheye image, and rows is the vertical height of the original fisheye image;
4) The standard coordinates A(xA, yA) are converted into spherical three-dimensional Cartesian coordinates P(xp, yp, zp), with the conversion given by formulas (3) and (4):

P(p, φ, θ)   (3)

where p is the radial distance of the line OP connecting a point P on the sphere to the origin O, θ is the angle between OP and the z-axis, φ is the angle between the projection of OP onto the xOy plane and the x-axis, r is the radius of the sphere, and F is the focal length of the fisheye camera. The spherical coordinates are converted into Cartesian coordinates according to formula (5):

xp = p·sinθ·cosφ, yp = p·sinθ·sinφ, zp = p·cosθ   (5)
5) The spatial coordinates of P are converted into latitude-longitude coordinates, with the conversion relationship given by formula (6):

where xp, yp and zp are the coordinates of the point P, latitude is the latitude coordinate and longitude is the longitude coordinate;
6) The latitude-longitude coordinates from step 5) are mapped to the pixel coordinates (xo, yo) of the unwrapped image, with the mapping relationship given by formula (7):

where xo is the horizontal pixel coordinate in the unwrapped image and yo is the vertical pixel coordinate in the unwrapped image;
7) After the pixel mapping has been completed, black gap points that no source pixel was mapped to will appear in the image; these black regions are then filled with the cubic interpolation algorithm so that the output image is complete.
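A minimal Python/OpenCV sketch of the latitude-longitude unwrapping described above is given below for illustration. It computes, for every pixel of the unwrapped output, the corresponding source pixel in the fisheye image, so that cv2.remap can resample directly with cubic interpolation; this is the inverse form of the forward mapping in steps 3)-6) and makes the separate gap-filling of step 7) unnecessary. The equidistant projection model, the output resolution and the assumption that the fisheye circle is centered in the frame are illustrative choices, not values fixed by the system:

```python
import cv2
import numpy as np

def fisheye_to_equirect(img, out_w=960, out_h=480, fov_deg=180.0):
    """Unwrap a circular 180-degree fisheye image into a latitude-longitude view.
    Inverse mapping: for each output pixel the source fisheye pixel is computed,
    then cv2.remap resamples the image with cubic interpolation."""
    rows, cols = img.shape[:2]
    cx, cy = cols / 2.0, rows / 2.0      # assumed: fisheye circle centered in the frame
    radius = min(cx, cy)                 # assumed: circle radius is half the shorter side

    # longitude spans the 180-degree horizontal field, latitude spans -90..+90 degrees
    lon = (np.arange(out_w) / (out_w - 1) - 0.5) * np.deg2rad(fov_deg)
    lat = (np.arange(out_h) / (out_h - 1) - 0.5) * np.pi
    lon, lat = np.meshgrid(lon, lat)

    # corresponding point on the unit sphere
    xp = np.cos(lat) * np.sin(lon)
    yp = np.sin(lat)
    zp = np.cos(lat) * np.cos(lon)

    # equidistant fisheye model: radial distance proportional to the angle from the optical axis
    theta = np.arccos(np.clip(zp, -1.0, 1.0))
    phi = np.arctan2(yp, xp)
    r = radius * theta / (np.deg2rad(fov_deg) / 2.0)

    map_x = (cx + r * np.cos(phi)).astype(np.float32)
    map_y = (cy + r * np.sin(phi)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_CUBIC,
                     borderMode=cv2.BORDER_CONSTANT)
```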
The panoramic image stitcher stitches the three fisheye images processed by the fisheye image processing module into a panoramic image. To keep the stitched field of view coherent, the three fisheye cameras facing different directions are numbered in a fixed order, and this order is kept unchanged in all subsequent operations. During image processing, the SIFT algorithm is used to compute the feature points of each image, which serve as local image descriptors that remain invariant under scale space, scaling, rotation and affine transformation. Matching feature points between adjacent images must then be found, and the RANSAC method is used to further screen the feature matches, so that the homography matrix can be computed from the mapping relationship between the matched feature points. Finally, the images are perspective-transformed with the computed homography matrix and the transformed images are stitched together, realizing vehicle-side panoramic image stitching. The specific steps are as follows:
1) After the fisheye images processed by the fisheye image processing module have been obtained, the images from the different viewing angles are first assigned fixed numbers in sequence, and this numbering is kept consistent for all subsequent images;
2) The SIFT algorithm provided with OpenCV is used to compute the feature points of each image, which serve as local image descriptors that remain invariant under scale space, scaling, rotation and affine transformation;
3) Image stitching also requires matching feature points between adjacent images, so the present invention first performs coarse matching of the fisheye images from the three viewing angles by computing a Euclidean distance measure, and then applies SIFT matching that compares the nearest-neighbor Euclidean distance with the second-nearest-neighbor Euclidean distance to screen the feature points of the two images; a pair is selected as a match when the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is less than 0.8;
4) The coarse matches obtained in step 3) are further screened with the RANSAC method to remove mismatched points, which improves the accuracy of the subsequent image processing; the mapping relationship between the feature points is then found and the homography matrix is computed from it;
5) The homography matrix computed in step 4) is used to apply a perspective transformation to the fisheye images processed by the fisheye image processing module; the perspective-transformed images are then stitched together and finally composed into a video stream, realizing the panoramic stitching function.
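A minimal Python/OpenCV sketch of steps 2)-5) for one pair of adjacent views is given below for illustration; the 0.8 ratio test and the RANSAC screening follow the description above, while the canvas size and the simple overlay of the warped image are illustrative choices:

```python
import cv2
import numpy as np

def stitch_pair(img_left, img_right, ratio=0.8, ransac_thresh=4.0):
    """Stitch two adjacent unwrapped views: SIFT features, ratio-test matching,
    RANSAC homography, perspective warp and overlay (sketch of steps 2)-5))."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)

    # coarse matching by Euclidean distance, then the nearest/second-nearest ratio test (< 0.8)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des2, des1, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects the remaining mismatches and yields the homography matrix
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)

    # warp the right view into the left view's frame and paste the left view on top
    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (w * 2, h))
    canvas[0:h, 0:w] = img_left
    return canvas
```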
The Web panorama player displays the panoramic image output by the panoramic image stitcher in a web page. To reduce the latency of the video display, the player is built with the rtc.js player plug-in as the front-end video player. To enable the front end to support panoramic video playback, the player is implemented with three.js, the HTML video tag and rtc.js: a spherical model is built with three.js and the video tag is used as the rendering material of the sphere's surface, so that the panoramic video is projected onto the sphere. With a browser installed on the embedded image processing device, the panoramic image can then be viewed on the image display device;
The image display device is physically connected to the embedded image processing device and is used to display the panoramic image presented by the Web player.
Compared with the prior art, the beneficial effects of the present invention are as follows. With only three fisheye cameras, 360° environment information around the vehicle body can be obtained, and the three analog video streams are combined into a single digital video stream by the video encoder and video capture card before being fed to the embedded image processing device for further processing, which greatly saves the port resources of the embedded device and keeps the overall design cost relatively low. At the same time, an embedded device with an integrated AI chip is used to improve real-time video processing and image output capability, and a self-designed panorama player is used to play the panoramic video. Together these greatly reduce or even completely eliminate the wide blind spots that exist when large special-purpose vehicles are driving, providing a good assisted driving effect.
Brief Description of the Drawings
Figure 1 is the overall framework diagram of the system of the present invention;

Figure 2 is a schematic diagram of the camera installation of the present invention;

Figure 3 is the fisheye image processing flow chart of the present invention;

Figure 4 is the image stitching and fusion flow chart of the present invention.
Detailed Description of the Embodiments
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings:
As shown in Figure 1, a vehicle-side panoramic view assisted driving system based on image fusion consists of three parts: an image acquisition device, an embedded image processing device and an image display device. The embedded image processing device is physically connected to the image acquisition device and the image display device by cables. The image acquisition device uses hardware such as a video encoder and a video capture card to combine, by hardware encoding, the analog video images output by the multiple fisheye cameras into a single digital video stream; this reduces the amount of data to be transmitted while feeding the digital video data into the embedded image processing device for image processing. After receiving the input digital images, the embedded image processing device first converts the severely distorted original fisheye images into unwrapped views by coordinate transformation, obtaining richer image information, and then stitches the three unwrapped fisheye images into a video stream. Finally, the Web panorama player presents the stitched panoramic image on the image display device through the web front end, greatly reducing or even completely eliminating the wide blind spots of large special-purpose vehicles while driving and improving their driving safety.
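As an illustration of how these parts fit together, the following is a minimal per-frame processing loop on the embedded device, reusing the fisheye_to_equirect and stitch_pair sketches above; the single capture device index, the assumption that the encoder places the three camera images side by side in each frame, and the use of cv2.imshow as a stand-in for the Web panorama player are all illustrative and would depend on the actual hardware:

```python
import cv2

cap = cv2.VideoCapture(0)          # assumed: the capture card appears as one video device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    w = frame.shape[1] // 3
    views = [frame[:, i * w:(i + 1) * w] for i in range(3)]   # fixed camera order 1, 2, 3
    equirects = [fisheye_to_equirect(v) for v in views]       # fisheye module sketch above
    panorama = stitch_pair(stitch_pair(equirects[0], equirects[1]), equirects[2])
    cv2.imshow("vehicle panorama", panorama)                   # stand-in for the Web panorama player
    if cv2.waitKey(1) == 27:                                   # Esc to quit
        break
cap.release()
```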
As shown in Figure 2, the fisheye cameras used in the present invention have a 180° field of view. To capture 360° environment information around the vehicle body and to obtain a good stitching result, the three fisheye cameras are installed at different positions at the same height, with an interval of 120° between adjacent cameras. By following the installation diagram in Figure 2 and adjusting the mounting positions of the three cameras on the large special-purpose vehicle according to the final displayed image, 360° panoramic environment image information around the vehicle body can be obtained, giving a good assisted driving effect.
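A quick check of this mounting geometry (three 180° cameras spaced 120° apart) shows full 360° coverage with a 60° overlap at each seam, which is what the feature-based stitching relies on; the figures below are simple arithmetic, not measured values:

```python
fov = 180            # field of view of each fisheye camera, in degrees
spacing = 360 / 3    # angular spacing between adjacent cameras: 120 degrees
overlap = fov - spacing            # 60 degrees shared by each adjacent pair of cameras
coverage = 3 * fov - 3 * overlap   # 360 degrees covered around the vehicle
print(overlap, coverage)           # -> 60.0 360.0
```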
As shown in Figure 3, the fisheye image processing module obtains the best possible output image through a series of transformations of the pixel coordinates of the fisheye image, mapping of the pixels, and filling of the gap points in the image. Its main implementation steps are as follows:
1) After the original fisheye image is obtained, a circular mask function is written from the center and radius of the fisheye imaging circle of each of the three views to extract the target image region, where the pixel coordinates of the extracted image region lie in the range given by formula (1):

x ∈ [0, cols-1], y ∈ [0, rows-1]   (1)

where x and y are the horizontal and vertical pixel coordinates of the extracted image, cols is the horizontal width of the original fisheye image, and rows is the vertical height of the original fisheye image;
2) To control the resolution of the final video after image fusion, the size of the images output in step 1) must be controlled;
3) The pixel coordinates (x, y) of the extracted image region are converted from the 2D Cartesian coordinate system into the standard coordinates A(xA, yA), with the conversion relationship given by formula (2):

where x and y are the horizontal and vertical pixel coordinates of the extracted image, cols is the horizontal width of the original fisheye image, and rows is the vertical height of the original fisheye image;
4) The standard coordinates A(xA, yA) are converted into spherical three-dimensional Cartesian coordinates P(xp, yp, zp), with the conversion given by formulas (3) and (4):

P(p, φ, θ)   (3)

where p is the radial distance of the line OP connecting a point P on the sphere to the origin O, θ is the angle between OP and the z-axis, φ is the angle between the projection of OP onto the xOy plane and the x-axis, r is the radius of the sphere, and F is the focal length of the fisheye camera. The spherical coordinates are converted into Cartesian coordinates according to formula (5):

xp = p·sinθ·cosφ, yp = p·sinθ·sinφ, zp = p·cosθ   (5)
5) The spatial coordinates of P are converted into latitude-longitude coordinates, with the conversion relationship given by formula (6):

where xp, yp and zp are the coordinates of the point P, latitude is the latitude coordinate and longitude is the longitude coordinate;
6) The latitude-longitude coordinates from step 5) are mapped to the pixel coordinates (xo, yo) of the unwrapped image, with the mapping relationship given by formula (7):

where xo is the horizontal pixel coordinate in the unwrapped image and yo is the vertical pixel coordinate in the unwrapped image;
7) After the pixel mapping has been completed, black gap points that no source pixel was mapped to will appear in the image; these black regions are then filled with the cubic interpolation algorithm so that the output image is complete.
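For the gap-filling of step 7), a small sketch using SciPy's scattered-data cubic interpolation is shown below for illustration; treating exactly-black pixels as unmapped gaps is an assumption made only for this example (a forward pixel mapping would normally record which output pixels were actually written), and the per-channel interpolation is chosen for clarity rather than speed:

```python
import numpy as np
from scipy.interpolate import griddata

def fill_black_gaps(img):
    """Fill output pixels that no source pixel was mapped to (step 7) with cubic interpolation."""
    gaps = img.sum(axis=2) == 0                  # assumed: unmapped points are exactly black
    if not gaps.any():
        return img
    src_y, src_x = np.nonzero(~gaps)
    gap_y, gap_x = np.nonzero(gaps)
    filled = img.copy()
    for c in range(img.shape[2]):                # interpolate each color channel separately
        values = img[src_y, src_x, c].astype(np.float64)
        filled[gap_y, gap_x, c] = griddata(
            (src_y, src_x), values, (gap_y, gap_x), method="cubic", fill_value=0)
    return filled
```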
As shown in Figure 4, the technical method of the panoramic image stitcher includes extracting and computing image feature points, matching feature points between adjacent images, finding the mapping relationship between the feature points, computing the homography matrix, applying the perspective transformation to the images, and stitching the images. Its main implementation steps are as follows:
1) After the fisheye images processed by the fisheye image processing module have been obtained, the images from the different viewing angles are first assigned fixed numbers in sequence, and this numbering is kept consistent for all subsequent images;
2) The SIFT algorithm provided with OpenCV is used to compute the feature points of each image, which serve as local image descriptors that remain invariant under scale space, scaling, rotation and affine transformation;
3) Image stitching also requires matching feature points between adjacent images, so the present invention first performs coarse matching of the fisheye images from the three viewing angles by computing a Euclidean distance measure, and then applies SIFT matching that compares the nearest-neighbor Euclidean distance with the second-nearest-neighbor Euclidean distance to screen the feature points of the two images; a pair is selected as a match when the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is less than 0.8;
4) The coarse matches obtained in step 3) are further screened with the RANSAC method to remove mismatched points, which improves the accuracy of the subsequent image processing; the mapping relationship between the feature points is then found and the homography matrix is computed from it;
5) The homography matrix computed in step 4) is used to apply a perspective transformation to the fisheye images processed by the fisheye image processing module; the perspective-transformed images are then stitched together and finally composed into a video stream, realizing the panoramic stitching function.
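Because the three cameras are rigidly mounted in a fixed order, the homographies between adjacent views stay essentially constant, so in a real-time implementation they can be estimated once (for example with the stitch_pair sketch above) and then reused for every frame instead of re-running SIFT on each frame. The sketch below illustrates this composition; the canvas size and the simple overlay blending are illustrative choices:

```python
import cv2

def stitch_three(v1, v2, v3, H21, H31, canvas_size):
    """Warp views 2 and 3 into view 1's coordinate frame using pre-computed
    homographies H21 and H31, then overlay the three views on one canvas."""
    w, h = canvas_size
    canvas = cv2.warpPerspective(v3, H31, (w, h))
    warped2 = cv2.warpPerspective(v2, H21, (w, h))
    mask2 = warped2.sum(axis=2) > 0              # pixels actually covered by view 2
    canvas[mask2] = warped2[mask2]
    canvas[0:v1.shape[0], 0:v1.shape[1]] = v1    # reference view pasted last
    return canvas
```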
The content described in the embodiments of this specification is merely an enumeration of the implementation forms of the inventive concept. The protection scope of the present invention should not be regarded as limited to the specific forms stated in the embodiments; it also extends to equivalent technical means that a person skilled in the art can conceive based on the inventive concept.
Claims (3)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210124847.5A CN114640801B (en) | 2022-02-10 | 2022-02-10 | A vehicle-side panoramic view assisted driving system based on image fusion |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210124847.5A CN114640801B (en) | 2022-02-10 | 2022-02-10 | A vehicle-side panoramic view assisted driving system based on image fusion |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114640801A (en) | 2022-06-17 |
| CN114640801B (en) | 2024-02-20 |
Family
ID=81946324
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210124847.5A Active CN114640801B (en) | 2022-02-10 | 2022-02-10 | A vehicle-side panoramic view assisted driving system based on image fusion |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114640801B (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116245748B (en) * | 2022-12-23 | 2024-04-26 | 珠海视熙科技有限公司 | Distortion correction method, device, equipment, system and storage medium for ring-looking lens |
| CN117893719B (en) * | 2024-03-15 | 2024-12-03 | 鹰驾科技(深圳)有限公司 | Method and system for splicing self-adaptive vehicle body in all-around manner |
| CN117935127B (en) * | 2024-03-22 | 2024-06-04 | 国任财产保险股份有限公司 | Intelligent damage assessment method and system for panoramic video exploration |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106357976A (en) * | 2016-08-30 | 2017-01-25 | 深圳市保千里电子有限公司 | Omni-directional panoramic image generating method and device |
| CN106683045A (en) * | 2016-09-28 | 2017-05-17 | 深圳市优象计算技术有限公司 | Binocular camera-based panoramic image splicing method |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180176465A1 (en) * | 2016-12-16 | 2018-06-21 | Prolific Technology Inc. | Image processing method for immediately producing panoramic images |
-
2022
- 2022-02-10 CN CN202210124847.5A patent/CN114640801B/en active Active
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106357976A (en) * | 2016-08-30 | 2017-01-25 | 深圳市保千里电子有限公司 | Omni-directional panoramic image generating method and device |
| CN106683045A (en) * | 2016-09-28 | 2017-05-17 | 深圳市优象计算技术有限公司 | Binocular camera-based panoramic image splicing method |
Non-Patent Citations (2)
| Title |
|---|
| A fast vehicle-mounted panorama generation method based on a 3D spatial sphere; Cao Libo; Xia Jiahao; Liao Jiacai; Zhang Guanjun; Zhang Ruifeng; China Journal of Highway and Transport (01); full text * |
| Binocular fisheye panoramic image generation based on spherical space matching; He Linfei; Zhu Yu; Lin Jiajun; Huang Junjian; Chen Xudong; Computer Applications and Software (02); full text * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN114640801A (en) | 2022-06-17 |
Similar Documents
| Publication | Title |
|---|---|
| CN114640801B (en) | A vehicle-side panoramic view assisted driving system based on image fusion | |
| CN110381255B (en) | Vehicle-mounted video monitoring system and method applying 360-degree panoramic looking-around technology | |
| CN108263283B (en) | Method for calibrating and splicing panoramic all-round looking system of multi-marshalling variable-angle vehicle | |
| CN107133988B (en) | Calibration method and calibration system for camera in vehicle-mounted panoramic looking-around system | |
| CN106952311B (en) | Auxiliary parking system and method based on panoramic stitching data mapping table | |
| US9858639B2 (en) | Imaging surface modeling for camera modeling and virtual view synthesis | |
| US6947059B2 (en) | Stereoscopic panoramic image capture device | |
| CN101276465B (en) | Wide-angle image automatic stitching method | |
| CN107154022B (en) | A dynamic panorama stitching method suitable for trailers | |
| CN102045546B (en) | Panoramic parking assist system | |
| CN102164274B (en) | Vehicle-mounted virtual panoramic system with variable field of view | |
| Zhu et al. | Monocular 3d vehicle detection using uncalibrated traffic cameras through homography | |
| CN109087251B (en) | Vehicle-mounted panoramic image display method and system | |
| CN113468991B (en) | A parking space detection method based on panoramic video | |
| CN1323547C (en) | A three-line calibration method for external parameters of vehicle-mounted cameras | |
| CN102881016A (en) | Vehicle 360-degree surrounding reconstruction method based on internet of vehicles | |
| US20090079830A1 (en) | Robust framework for enhancing navigation, surveillance, tele-presence and interactivity | |
| CN110736472A (en) | An indoor high-precision map representation method based on the fusion of vehicle surround view image and millimeter-wave radar | |
| CN102291541A (en) | Virtual synthesis display system of vehicle | |
| JP3381351B2 (en) | Ambient situation display device for vehicles | |
| CN106856000A (en) | A kind of vehicle-mounted panoramic image seamless splicing processing method and system | |
| CN110363085A (en) | A Surround View Realization Method for Heavy-duty Articulated Vehicles Based on Articulation Angle Compensation | |
| CN101102514A (en) | Real-time panoramic seamless distortion-free video camera | |
| CN109883433A (en) | Vehicle localization method in structured environment based on 360-degree panoramic view | |
| CN111626227A (en) | Method for realizing vehicle bottom perspective panoramic system based on binocular vision |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |