CN104506828B - Fixed-point directional real-time video stitching method for images with no effective overlap and variable structure - Google Patents
Publication number: CN104506828B · Authority: CN (China) · Legal status: Active
Abstract
The invention discloses a fixed-point directional real-time video stitching method for images with no effective overlap and variable structure. The method includes: separately collecting video streams at different positions; dividing the compressed video streams chronologically into multiple first static video frame groups; converting the static image corresponding to each video stream in a first static video frame group into a top view of the photographed object, forming a second static video frame group; and performing positioning, coarse panoramic stitching, compensation fusion, and fine panoramic stitching on each video frame in the second static video frame group to obtain a real-time panoramic video stream. The invention stitches images of the same scene that are captured from different viewing angles and directions and share no effective overlap, and then generates a video stream in real time. The stitching method of the invention not only improves stitching accuracy but also preserves stitching efficiency, meeting the real-time requirement of video stream stitching.
Description
Technical Field
The invention relates to the technical field of real-time video image stitching, and more specifically to a fixed-point directional real-time video stitching method for images with no effective overlap and variable structure.
Background Art
In recent years, with the rapid development of video stitching technology, it has been widely applied in large-view high-resolution imaging, virtual reality, medical imaging, remote sensing, and military fields. Video stitching mainly comprises image stitching and real-time video synthesis. First, image stitching is the core of video stitching and includes two key steps, image registration and image fusion: image registration is the foundation of image stitching, and its goal is to match multiple images taken from different camera positions and angles; image fusion synthesizes a high-quality image by eliminating the intensity or color discontinuities between adjacent images caused by geometric correction, dynamic scenes, or illumination changes. Second, real-time video synthesis is achieved by raising the execution efficiency of the image stitching algorithm on parallel computing architectures such as FPGAs, Intel's IPP, and Nvidia's CUDA.
From the perspective of image acquisition, stitching setups fall roughly into four categories: 1) a single camera on a fixed pivot whose lens rotates to capture the same scene; 2) a single camera fixed on a slide rail, translating to capture the same scene; 3) multiple cameras capturing the same scene at different viewing angles and directions, with effective overlapping regions between images that can be used for stitching; 4) multiple cameras capturing the same scene at different viewing angles and directions, with no effective overlapping regions usable for stitching, and even small gaps and holes between the images. The practical problem addressed by the present invention mainly concerns the fourth category, namely video acquisition and stitching of the same scene by multiple fixed-point, fixed-orientation cameras.
From the perspective of image registration, the core technology of image stitching, two registration techniques currently exist: the phase correlation method and the geometric feature method. The phase correlation method is scene-independent and can precisely align images related by pure two-dimensional translation. However, it is only suited to registration problems involving pure translation or rotation; under affine and perspective transformation models it cannot register the images, and in practice the camera position can never be made absolutely parallel to the imaging plane, so its practical value is low. The geometric feature method stitches images based on low-level geometric features in the image, such as edge, corner, and vertex models. However, it presupposes that the two images share a certain amount of overlap and that the matched images have temporally coherent features; it is powerless for stitching non-overlapping images of the same scene captured by multiple cameras at different viewing angles and directions.
From the perspective of image fusion, the aim is to eliminate the seams between images in both color/brightness and structure. Many fusion methods exist: for image color and brightness, simple approaches include intensity-weighted and weighted-average fusion, while more complex ones include the image Voronoi weight method and Gaussian spline interpolation. Their core idea is to first segment the images, use the overlapping region as the matching reference, and then fuse the seam through image correction, color transformation, and pixel interpolation. For structural differences across the seam, a simple feathering method is generally used: weights are averaged according to Euclidean distance, and filtering then removes the blur caused by feathering and the ghosting in the stitched video. Clearly, existing methods cannot fuse the seams of non-overlapping images of the same scene captured by multiple cameras at different viewing angles and directions, nor can existing techniques meet the real-time requirement of live video stream fusion.
The invention patent with publication No. CN103501415A is a real-time video stitching method based on structural deformation of overlapping regions. Its working principle is to first compute the seam of each of the two images, then extract and match one-dimensional feature points along the two seams, move the matched feature points to a coincident position and record the displacement, diffuse the structural deformation within a preset diffusion range, and finally compute the gradient map after deformation and complete image fusion in the gradient domain to obtain the final stitched image. To meet the real-time requirement of video stitching, the patent implements the stitching algorithm on an FPGA for fast, efficient stitching. Clearly, the method of this patent cannot stitch non-overlapping images of the same scene captured by multiple cameras at different viewing angles and directions, and therefore cannot meet the practical needs addressed by the present invention.
The invention patent with publication No. CN101593350A is a depth-adaptive video stitching method, device, and system. Its video stitching system includes a camera array, a calibration device, a video stitching device, a pixel interpolator, and a mixer. First, the camera array generates multiple source videos; then the calibration device performs epipolar calibration and camera position calibration, determines the seam region of each spatially adjacent image pair in the source videos, and generates a pixel index table; next, the stitching device updates the pixel index table using compensation terms formed from average offset values; finally, the pixel interpolator and mixer use the updated table to combine the source videos into a panoramic video. Clearly, this patent seeks a balance between stitching quality and stitching efficiency by simplifying the stitching algorithm in order to stitch video streams. But given the diversity and complexity of captured video, a single serial stitching algorithm simply cannot meet the real-time requirement of stitching computation-heavy, data-intensive video streams, so its practicality is poor.
Summary of the Invention
In view of the deficiencies in this field, the object of the present invention is to provide a method for fixed-point directional real-time stitching of images with no effective overlap and variable structure, captured of the same scene by multiple cameras at different viewing angles and directions.

The technical scheme realizing the above object of the present invention is as follows:

The invention discloses a fixed-point directional real-time video stitching method for images with no effective overlap and variable structure, comprising the following steps:
Step 1: Install a multi-camera video acquisition array, separately collect video streams at different positions, and perform analog-to-digital conversion, synchronization, and compression on the video streams.

Step 2: Convert the compressed video streams into a common video format and divide them chronologically into multiple first static video frame groups, where each first static video frame group comprises the n video streams captured by the multi-camera video acquisition array at the same instant.

Step 3: Convert the static image corresponding to each video stream in a first static video frame group into a top view of the photographed object according to the side-view-to-top-view geometric model, forming a second static video frame group.

Step 4: Position each video frame in the second static video frame group according to the positioning model and perform coarse panoramic stitching to obtain a coarse panoramic mosaic.

Step 5: According to the positioning model, determine in the coarse panoramic mosaic the seam positions of overlapping regions, of seamless hole-free non-overlapping regions, and of regions with holes or cracks.

Step 6: For the seams of overlapping regions or of seamless hole-free non-overlapping regions, fuse the seams using brightness and color interpolation.

Step 7: For the seams of regions with holes or cracks, the stitching process is as follows:

determine, according to the positioning model, the adjacency relations between the sub-images corresponding to the video frames of the second static video frame group, and between the sub-images and the holes or cracks, and identify the hole or crack sub-images from these relations;

extract line features from the sub-images adjacent to a hole or crack sub-image;

match the extracted line features to obtain line feature pairs;

extrapolate boundary points from the line features of adjacent regions to compensate for the cracks or holes;

for the coarse panoramic mosaic after hole or crack compensation, fuse the seams using brightness and color interpolation to obtain the finely stitched panoramic video frame.

Step 8: Process the first static video frame groups of different instants according to Steps 3 to 7 to obtain panoramic video frames at different instants, and synthesize the panoramic video frames in chronological order to obtain a real-time panoramic video stream.
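Treating each stage as a black box, the chronological grouping of Step 2 and the per-group pipeline of Steps 3 to 8 can be sketched as follows (a minimal Python sketch; the stage functions are illustrative placeholders supplied by the caller, not part of the patent):

```python
from typing import List, Sequence

def make_frame_groups(streams: Sequence[Sequence[object]]) -> List[tuple]:
    """Step 2: zip n synchronized streams into chronological frame groups,
    one group per instant, each holding one frame from every stream."""
    return list(zip(*streams))

def stitch_stream(streams, to_top_view, coarse_stitch, compensate, fuse):
    """Steps 3-8: for each frame group, run the four stages in order and
    collect the panoramic frames chronologically."""
    panorama_frames = []
    for group in make_frame_groups(streams):
        top_views = [to_top_view(f) for f in group]   # Step 3
        coarse = coarse_stitch(top_views)             # Step 4
        patched = compensate(coarse)                  # Steps 5-7: seams, holes
        panorama_frames.append(fuse(patched))         # Step 7: final fusion
    return panorama_frames                            # Step 8: chronological
```

With stub stages (e.g. `sum` as the coarse stitch over toy integer "frames"), the function simply demonstrates the grouping and ordering contract.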
Step 2 is performed after a synchronous video segmentation instruction is received, and after Step 2 finishes, the first static video frame groups are stored in chronological order.
The side-view-to-top-view geometric model in Step 3 is given by formula (1):

s·(x, y, 1)ᵀ = A·(R_x R_y R_z t)·(X, Y, Z, 1)ᵀ,  A = ((f_x, 0, c_x), (0, f_y, c_y), (0, 0, 1))   (1)

where s is a scale factor; f_x, f_y are the focal lengths of the camera; c_x, c_y are image correction parameters; R_x, R_y, R_z are the three column vectors of the rotation matrix; t is the translation vector; (x, y, z) are the element coordinates of the side view of the static image; and (X, Y, Z) are the top-view coordinates of the corresponding element.
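As a hedged illustration of model (1), the following Python sketch applies the projection with assumed intrinsic and extrinsic values; the numeric values (focal lengths, correction parameters, rotation, translation) are arbitrary examples, not calibration results from the patent:

```python
import numpy as np

# Intrinsics: focal lengths f_x, f_y and correction parameters c_x, c_y
# (illustrative values only).
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
A = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Extrinsics: rotation columns R_x, R_y, R_z and translation t (assumed).
R = np.eye(3)                    # identity rotation for the sketch
t = np.array([0.0, 0.0, 5.0])    # camera 5 units from the object plane
Rt = np.hstack([R, t[:, None]])  # the 3x4 matrix (R_x R_y R_z t)

def side_view_pixel(X, Y, Z):
    """Apply model (1): s*(x, y, 1)^T = A (R|t) (X, Y, Z, 1)^T,
    returning the side-view pixel (x, y)."""
    p = A @ Rt @ np.array([X, Y, Z, 1.0])
    s = p[2]                     # scale factor s
    return p[0] / s, p[1] / s

x, y = side_view_pixel(0.0, 0.0, 0.0)  # plane origin maps to (c_x, c_y) here
```

With this frontal setup the top-view origin lands exactly on the principal point, which is a quick sanity check of the matrix chain.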
The positioning model in Steps 4 and 5 is the system (2) formed by the camera centerline, the cone generatrix, and the latitude circle of the camera's field:

(x − x0)/(x1 − x0) = (y − y0)/(y1 − y0) = (z − z0)/(z1 − z0),
(x2 − x0)/cos α = (y2 − y0)/cos β = (z2 − z0)/cos γ,   (2)

with the field boundary on the plane xoy obtained by eliminating x2, y2, z2. Here (x0, y0, z0) are the coordinates of the camera lens center point; (x1, y1, z1) are the coordinates of the intersection of the photographed object with the imaging plane xoy; (α, β, γ) are the direction angles of the generatrix of the cone bounding the camera's field; (x2, y2, z2) are the coordinates of the intersection of the latitude circle of the camera's field with that generatrix; and (x, y, z) are the coordinates of the intersection of the camera's field with the plane xoy.
The coarse panoramic stitching in Step 4 is specifically:

First, generate a blank image the same size as the panoramic field of the photographed object.

Second, position the sub-image corresponding to each video frame of the second static video frame group using the positioning model, determining each sub-image's position, size, and orientation within the blank image.

Third, fill the sub-images one by one into their corresponding places in the blank image according to the predetermined numbering order of the cameras in the multi-camera array and the positioning information of the sub-images they captured, realizing the coarse mosaic of the panorama.
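The three sub-steps above can be sketched as follows (an illustrative Python sketch: the placement tuples stand in for the output of the positioning model, grayscale images are assumed, and the orientation step is taken as already applied):

```python
import numpy as np

def coarse_stitch(canvas_shape, placements):
    """Coarse panoramic stitching: paste each positioned sub-image into a
    blank canvas the size of the panoramic field.  `placements` is a list
    of (row, col, sub_image) tuples giving each sub-image's top-left
    position, taken in camera numbering order."""
    canvas = np.zeros(canvas_shape, dtype=np.uint8)   # the blank panorama
    for r, c, sub in placements:
        h, w = sub.shape
        canvas[r:r + h, c:c + w] = sub                # fill; no blending yet
    return canvas
```

Pixels that no sub-image covers stay zero, which is exactly where Step 5 later finds the holes and cracks.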
In Step 7, extracting the line features of the sub-images adjacent to a hole or crack sub-image is specifically:

Let C(x, y) be the center pixel, and let L(x, y) and R(x, y) be the average gray values of the left and right neighborhoods of C(x, y) along a given direction. The ratio-of-averages estimate is then given by formula (3):

RoA: C(x, y) = max{R(x, y)/L(x, y), L(x, y)/R(x, y)}   (3)

The RoA value is then compared with a predetermined threshold T0; when it exceeds T0, point C is considered a boundary point.

The line feature fragments in the sub-images adjacent to the hole or crack sub-image are extracted by the above algorithm and reorganized into line features.
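A minimal Python sketch of the ratio-of-averages test of formula (3), applied along one (horizontal) direction; the neighborhood width and threshold are illustrative parameters, not values fixed by the patent:

```python
import numpy as np

def roa_boundary_mask(img, half, T0):
    """For each pixel C, compare the mean gray value of the `half` pixels
    to its left (L) and right (R); C is marked as a boundary point when
    RoA = max(R/L, L/R) exceeds the threshold T0 (formula (3))."""
    img = img.astype(float)
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for yy in range(h):
        for xx in range(half, w - half):
            L = img[yy, xx - half:xx].mean()       # left neighborhood mean
            R = img[yy, xx + 1:xx + 1 + half].mean()  # right neighborhood mean
            if min(L, R) == 0:                     # avoid division by zero
                continue
            mask[yy, xx] = max(R / L, L / R) > T0
    return mask
```

In a full extractor the same test would be repeated along several directions and the marked points chained into line feature fragments.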
In Step 7, matching the extracted line features is specifically:

Each line feature is described by a corresponding line segment function. Suppose there are n sub-images surrounding the hole or crack. First, the slopes of the line segment functions extracted from each sub-image form the set I given by formula (4):

I = {k_1^(1), …, k_m^(1); k_1^(2), …, k_n^(2); …; k_1^(n), …, k_l^(n)}   (4)

where m, n, l denote the total numbers of line features extracted from the corresponding sub-images.

Line feature matching between sub-images is realized by formula (5):

|k_a − k_b| ≤ T1   (5)

where k_a and k_b are any two elements of the set I drawn from different sub-images, and T1 is the matching threshold; slopes satisfying formula (5) form a matched line feature pair.

In Step 7, extrapolating boundary points from the line features of adjacent regions to compensate for cracks or holes is specifically:
First, from the first line segment functions corresponding to a matched line feature pair, construct a second line segment function that satisfies all line features in the pair; this second function is taken as a reasonable fit of the line feature across the hole or crack.

Then, extrapolate the second line segment function into the hole or crack, thereby determining the position of the line feature fixed by the matched pair.

Finally, for the extrapolated line features of the hole or crack, use the color and brightness of the corresponding matched line feature pairs in the sub-images adjacent to the hole or crack sub-image, and apply color and brightness interpolation to fuse the seam.
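The matching and extrapolation steps can be sketched as follows (illustrative Python: describing each line feature by a (slope, intercept) pair and averaging the matched pair into the second segment function are simplifying assumptions, since the patent does not fix the fitting rule):

```python
def match_slopes(features_a, features_b, T1):
    """Pair line features from two adjacent sub-images whose segment
    slopes differ by at most the threshold T1 (formula (5) style).
    Each feature is a (slope, intercept) pair."""
    pairs = []
    for ka, ba in features_a:
        for kb, bb in features_b:
            if abs(ka - kb) <= T1:
                pairs.append(((ka, ba), (kb, bb)))
    return pairs

def extrapolate_pair(pair):
    """Build the second segment function y = k*x + b satisfying both
    matched features (simple averaging sketch); evaluating it inside the
    hole or crack extrapolates the line feature across the gap."""
    (ka, ba), (kb, bb) = pair
    k = (ka + kb) / 2.0
    b = (ba + bb) / 2.0
    return lambda x: k * x + b

pairs = match_slopes([(1.0, 0.0), (5.0, 2.0)], [(1.1, 0.2)], T1=0.2)
line = extrapolate_pair(pairs[0])    # fitted line across the hole
```

The unmatched feature with slope 5.0 is discarded, mirroring how only matched pairs drive the compensation.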
In Steps 6 and 7, fusing the seams with the color and brightness interpolation algorithm is specifically:

Suppose there are m sub-images adjacent to the hole or crack sub-image. The gray, color, and brightness values of a point P inside the crack or hole can be computed from the gray, color, and brightness values of the point nearest to P in each of the m sub-images by formula (6):

g(p) = Σ_{i=1}^{m} ξ(d_i) · g_i(x_i, y_i)   (6)

where g(p) denotes any of the gray value, color value, or brightness value of point P; g_i(x_i, y_i) denotes the value corresponding to g(p) at the point of the i-th sub-image nearest to P; d_i is the distance from P to that point; and the function ξ(x) is a linear weight function.

The complete panoramic video frame is obtained by performing the above fusion operation on every pixel in the crack or hole, one by one.
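A hedged Python sketch of formula (6); since the linear weight function ξ is not specified in the text, the normalization used here (closer points weigh more, weights summing to one) is an assumption:

```python
def fuse_point(neighbors):
    """Estimate the value at a hole pixel P from the nearest point of each
    adjacent sub-image.  `neighbors` is a list of (distance, value) pairs;
    each weight is linear in distance, largest for the closest point."""
    if len(neighbors) == 1:
        return neighbors[0][1]           # single neighbor: copy its value
    d_total = sum(d for d, _ in neighbors)
    weights = [d_total - d for d, _ in neighbors]   # linear in distance
    w_total = sum(weights)
    return sum(w * v for w, (_, v) in zip(weights, neighbors)) / w_total
```

Running this per channel (gray, color, brightness) over every hole pixel reproduces the per-pixel fusion loop described above.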
In Step 8, the panoramic video stream is compressed, stored, and displayed.
Corresponding to the method of the present invention, the device used comprises a multi-camera video acquisition array U3, a multi-channel video synchronous acquisition unit U4, a multi-channel video synchronous segmentation unit U5, a multi-channel video side-view-to-top-view unit U6, a GPGPU video frame positioning, stitching, compensation and fusion unit U7, and a real-time panoramic video stream generation unit U9, as shown in Figure 2.

The multi-camera video acquisition array collects video streams of the photographed object at different positions and passes them to the multi-channel video synchronous acquisition unit. The synchronous acquisition unit performs analog-to-digital conversion, synchronization, and compression on the received streams and passes them to the synchronous segmentation unit. On receiving the synchronous video segmentation instruction from the acquisition unit, the segmentation unit converts the received information into a common video format, divides it chronologically into multiple first static video frame groups, each comprising the n video streams captured by the multi-camera array at the same instant, and passes the first static video frame groups to the side-view-to-top-view unit. The side-view-to-top-view unit converts the static image corresponding to each video stream in a received first static video frame group into a top view of the photographed object, forming a second static video frame group, which it passes to the GPGPU video frame positioning, stitching, compensation and fusion unit. That unit positions, stitches, compensates, and fuses each video frame in the second static video frame group to obtain a panoramic video frame of the photographed object and passes it to the real-time panoramic video stream generation unit, which synthesizes the panoramic video frames received at different instants into a real-time panoramic video stream in chronological order. The GPGPU video frame positioning, stitching, compensation and fusion unit is based on the CUDA parallel computing architecture.
Multi-Camera Video Acquisition Array

The multi-camera video acquisition array is a camera array composed of n cameras installed according to fixed installation parameters, as shown in Figure 2. By using lenses with different viewing angles and different shooting angles, the cameras in the array achieve basic coverage of the captured scene U1. However, there is no effective overlapping region usable for stitching between the camera images U2 of the same scene U1, and there may even be small gaps and holes.
Multi-Channel Video Synchronous Acquisition Unit

As shown in Figure 2, this unit is composed of multiple video capture cards with multi-channel synchronous acquisition capability. Its workflow is as follows: the multi-channel video synchronous acquisition unit U4 converts the n analog signals from the video sources of the multi-camera array U3 into digital signals through the A/D conversion modules on the capture cards and transfers them to the cards' onboard memory; the video compression and synchronization chips on the cards then run synchronization and compression algorithms on each channel, synchronizing the large video signals and compressing them into n video streams, which are passed on to the multi-channel video synchronous segmentation unit U5, completing the workflow.
Multi-Channel Video Synchronous Segmentation Unit

This unit is an FPGA programmable hardware platform preloaded with a parallel image processing hardware logic circuit. The algorithm divides the n video streams delivered by the synchronous acquisition unit U4 in Figure 2 chronologically into several static sub-image groups (note that in the present invention no distinction is made between "video frame" and "sub-image"), each group consisting of the n static images of the n video streams at the same instant. The image groups of successive instants are then transferred in order to the multi-channel video frame side-view-to-top-view unit U6, completing the unit's workflow.
Multi-Channel Video Frame Side-View-to-Top-View Unit

To save device cost and improve integration, as shown in Figure 2, this unit is implemented on the same FPGA programmable hardware platform as the synchronous segmentation unit U5. Its core image transformation hardware logic circuit is preloaded on the platform as the follow-on algorithm of U5, converting the n side-view video frames of the same instant into top views as if the cameras faced the photographed object directly. For this purpose, the invention builds an image geometric transformation model based on the multi-camera installation parameters for converting side views into top views, with the following specific steps:
(1) According to the camera imaging principle, the coordinate transformation from the camera's physical side-view plane coordinate system (x, y, z) to the camera's virtual top-view coordinate system (X, Y, Z) can be established as in formula (1), with the parameters:

s — scale factor
f_x, f_y — focal lengths of the camera
c_x, c_y — image correction parameters
R_x, R_y, R_z — the three column vectors of the rotation matrix
t — translation vector
(2) Calibrate the camera's imaging parameters according to the multi-camera installation parameters together with standard images taken by the camera, obtaining the imaging parameters needed for the side-view-to-top-view transformation. As shown in Figure 3, taking the black-and-white checkerboard U61 as an example, the imaging calibration program locates the calibration points U63 needed for parameter computation, and the position changes of the calibration points are used to establish the system of equations from which the parameter values of the side-view-to-top-view transformation model are solved, completing the camera imaging parameter calibration.

(3) Using the calibrated camera parameters and the side-view-to-top-view image geometric transformation model built on them, the black-and-white checkerboard side view U61 shown in Figure 3 can be transformed into the black-and-white checkerboard top view U62.
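Solving a system of equations from calibration point correspondences can be sketched, for the planar checkerboard case, as a direct linear transform (DLT) fit of the plane-to-plane mapping; this is an illustrative reconstruction using SVD, not the patent's actual calibration program:

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Estimate the 3x3 plane-to-plane transform from calibration point
    correspondences (e.g. checkerboard corners in the side view vs. their
    known top-view positions) by solving the DLT linear system with SVD."""
    rows = []
    for (x, y), (X, Y) in zip(src_pts, dst_pts):
        rows.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        rows.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)      # null vector = flattened transform
    return H / H[2, 2]            # fix the free scale

def warp_point(H, x, y):
    """Map a side-view point to the top view with the fitted transform."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

At least four non-degenerate correspondences are needed; with exact points the fit recovers the transform up to the normalized scale.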
In summary, the workflow of the multi-channel video frame side-view-to-top-view unit U6 is as follows: first it receives the sub-image group (first static video frame group) of a given instant from the synchronous segmentation unit U5; then, in camera numbering order, it converts the side views of the original sub-image group into top views using the installation-parameter-based image geometric transformation model, passes the converted group as a new sub-image group to the GPGPU video frame positioning, stitching, compensation and fusion unit U7, and prepares to receive the next instant's sub-image group; and so on until the workflow is complete.
GPGPU Video Frame Positioning, Stitching, Compensation and Fusion Unit

This unit, the key unit of the invention, is an image processing software system running on an Nvidia high-performance GPU. Developed on the CUDA parallel computing architecture, it consists of four functional sub-modules: the sub-image positioning module U71, the sub-image group fixed-point directional coarse panoramic stitching module U72, the sub-image group seam compensation module U73, and the sub-image group seam fusion module U74. Each sub-module is an image processing algorithm proposed by the invention, whose principles are explained as follows:
(1) Sub-image positioning module
Because each camera in the multi-camera video acquisition array is installed in a fixed pose, as shown in Figure 4, and given the principle by which a camera's scene region is formed, a positioning model for the stitched sub-images can be established to determine the exact area captured by each camera. The steps are as follows.
1) First, from the fixed-point installation parameters of the single camera U711 shown in Figure 4, the camera lens centre p0 (U712) and the intersection p1 (U713) of the camera centreline l0 (U714) with the xoy plane of the subject can be determined. In the (x, y, z) coordinate system of Figure 4 their coordinates are p0(x0, y0, z0) and p1(x1, y1, z1), so the spatial line equation of the centreline l0 is the two-point form (x − x0)/(x1 − x0) = (y − y0)/(y1 − y0) = (z − z0)/(z1 − z0).
2) Second, from the directional installation parameters of camera U711 shown in Figure 4, the spatial direction angle of the lens centreline U714 can be determined; combined with the camera's field-of-view angle, this gives the direction angles (α, β, γ) of the generatrix l1 (U712) of the cone that forms the camera's scene region. Since the generatrix l1 passes through the lens centre p0(x0, y0, z0), the equation of the spatial line l1 is (x − x0)/cos α = (y − y0)/cos β = (z − z0)/cos γ.
3) Finally, by the latitude-circle method, the scene-region curve Γ2 (U717) that camera U711 forms on the xoy plane is obtained by eliminating the parameters x2, y2, z2 from the following system,
where M1 is the intersection of the generatrix l1 with an arbitrary latitude circle Γ1 (U715), with coordinates (x2, y2, z2).
In summary, the sub-image positioning module needs only the fixed-point directional installation parameters of each camera in the multi-camera video acquisition array U3 of Figure 2 to establish each camera's scene-region curve equation, and can therefore predict the area and size of each stitched sub-image in the panorama, realizing the positioning of the stitched sub-images within the panorama.
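The footprint construction of steps 1) to 3) can be illustrated numerically: sweep the cone's generatrices around the optical axis and intersect each one with the subject plane z = 0, collecting the intersection points into the curve Γ2. This is a hedged sketch under an ideal pinhole-cone assumption; the function names and the sample camera pose are illustrative, not taken from the patent.

```python
import numpy as np

def ray_plane_z0(p0, d):
    """Intersect the ray p0 + t*d with the subject plane z = 0."""
    t = -p0[2] / d[2]
    return p0 + t * d

def footprint(p0, axis, half_fov, n=360):
    """Sample the boundary curve of the camera's ground footprint:
    tilt the optical axis by the half field-of-view angle to get a
    generatrix, rotate it around the axis, and intersect each ray
    with the plane z = 0."""
    axis = axis / np.linalg.norm(axis)
    # build an orthonormal basis (u, v) perpendicular to the axis
    tmp = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, tmp); u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    pts = []
    for theta in np.linspace(0, 2 * np.pi, n, endpoint=False):
        d = np.cos(half_fov) * axis + np.sin(half_fov) * (np.cos(theta) * u + np.sin(theta) * v)
        pts.append(ray_plane_z0(p0, d)[:2])
    return np.array(pts)

# A camera 5 m above the plane looking straight down with a 30-degree
# half field of view traces a circle of radius 5*tan(30 deg) on the ground:
curve = footprint(np.array([0.0, 0.0, 5.0]), np.array([0.0, 0.0, -1.0]), np.radians(30))
```

For a tilted camera (as in the patent's side-view installation) the same sweep yields an ellipse-like curve, which is exactly the region the positioning module predicts for each sub-image.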
(2) Sub-image group fixed-point directional panoramic rough stitching module
This sub-module performs rough stitching of the panorama from the top-view sub-image group produced by unit U6, guided by the sub-image positioning module. Its workflow is as follows: first, generate a blank image with the same dimensions as the panoramic scene according to the preset configuration; second, run each sub-image of the received top-view group through the sub-image positioning module U71 of Figure 5 in turn to determine its position, size and orientation within the blank image; third, following the predetermined numbering of the cameras in the multi-camera array and the positioning information of their captured sub-images, paste the sub-images one by one into their corresponding places in the blank image, producing the rough panorama; finally, calibrate the seams of the rough panorama, marking the specific position, shape and extent of the overlap regions, the seam or hole regions and the seamless regions, which completes the workflow of the rough stitching module U72.
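The rough-stitching step, pasting each located sub-image into a blank panorama canvas, reduces to array slicing. A minimal sketch follows, assuming grayscale images and integer pixel positions; a real implementation would also record the seam, hole and overlap regions for the later modules.

```python
import numpy as np

def rough_stitch(canvas_shape, sub_images, positions):
    """Paste each top-view sub-image into a blank panorama canvas at the
    (row, col) position predicted by the positioning model."""
    canvas = np.zeros(canvas_shape, dtype=float)
    for img, (r, c) in zip(sub_images, positions):
        h, w = img.shape
        canvas[r:r + h, c:c + w] = img
    return canvas

# Two 2x2 tiles on a 2x5 canvas leave an uncovered column between them,
# i.e. a stitching hole to be compensated later:
tiles = [np.full((2, 2), 1.0), np.full((2, 2), 2.0)]
pan = rough_stitch((2, 5), tiles, [(0, 0), (0, 3)])
```

Note how the uncovered column models exactly the "seam or hole" case that the compensation module U73 is designed to fill.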
(3) Sub-image group seam compensation module
As shown in Figure 6, this module consists of three sub-modules: the sub-image line feature extraction sub-module U731, the line feature matching sub-module between adjacent sub-images U732, and the sub-module that compensates seams and holes by extrapolating boundary points from adjacent-region line features U733. Their working principles are described below.
1) Sub-image line feature extraction sub-module
To extract the two-dimensional line features of each sub-image in this sub-module, the step-type boundaries in the image must first be obtained. This invention uses the RoA (ratio of averages) algorithm to detect sub-image boundaries: it decides whether a target pixel is an edge point from the ratio of the mean intensities of its neighbouring regions. Because the method uses region means rather than individual pixels, it greatly suppresses the strong per-pixel fluctuations caused by speckle noise, so the line features obtained are highly reliable. To reduce computation, line features are extracted only within a certain band adjacent to the stitching seam of each sub-image. The algorithm compares neighbouring regions along a chosen direction. Its steps are: first, take C(x, y) as the centre pixel, and let L(x, y) and R(x, y) be the mean grey values of the regions to the left and right of C(x, y) along that direction; the ratio of averages is then estimated as
RoA: C(x,y) = max{ R(x,y)/L(x,y), L(x,y)/R(x,y) }
Then RoA: C(x,y) is compared with a predetermined threshold T0; when it exceeds the threshold, C is taken to be a boundary point. Finally, the line-feature fragments extracted by this procedure are reorganised into meaningful line features, completing the function of this sub-module.
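A minimal sketch of the RoA edge test described above, scanning only in the horizontal direction for brevity; the window width `w` and threshold `T0` are illustrative values, not ones specified by the patent.

```python
import numpy as np

def roa_edge_map(img, w=3, T0=1.5):
    """Ratio-of-averages edge detector (horizontal direction only).
    A pixel is an edge point when the larger of the two mean ratios of
    its left/right neighbourhoods exceeds the threshold T0."""
    img = np.asarray(img, dtype=float)
    h, wd = img.shape
    edges = np.zeros_like(img, dtype=bool)
    eps = 1e-9  # guard against division by zero in flat dark regions
    for y in range(h):
        for x in range(w, wd - w):
            L = img[y, x - w:x].mean() + eps          # left neighbourhood mean
            R = img[y, x + 1:x + 1 + w].mean() + eps  # right neighbourhood mean
            edges[y, x] = max(R / L, L / R) > T0
    return edges

# A step image: left half intensity 10, right half 100.
# The columns flanking the step are flagged as boundary points.
step = np.full((5, 12), 10.0)
step[:, 6:] = 100.0
em = roa_edge_map(step, w=3, T0=1.5)
```

Because the test uses region means, a single noisy pixel shifts L or R only by 1/w of its deviation, which is the noise robustness the text claims.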
2) Line feature matching sub-module between adjacent sub-images
To match the two-dimensional line features extracted from adjacent sub-images in this sub-module, the line features must first be expressed mathematically. This invention uses mathematical fitting to describe each line feature extracted from the adjacent sub-images with a corresponding line-segment function. Taking a stitching hole as an example, suppose n sub-images surround the hole. First, the set I formed by the slopes of the line-segment functions extracted from each sub-image is expressed by the following formula,
where subscripts such as m, n and l each denote the total number of line features extracted from the corresponding sub-image; line feature matching between sub-images is then achieved with the following formula,
where the two slopes compared are each an arbitrary element of the set I but do not simultaneously denote the same element, and the matching threshold T1 is a small positive number. Finally, the well-matched line features are regrouped into line-feature pairs, completing the line feature matching process for the whole sub-image group.
Taking the stitching hole T734 shown in Figure 7 as an example, the matching process above runs as follows: first, the line features extracted from the left image T731, the right image T732 and the bottom image T733 are {T7311, T7312, T7313, T7314}, {T7321, T7322} and {T7331, T7332, T7333} respectively; then the line feature matching algorithm of the formula above is applied to the features extracted from the three images; finally, the matched features are regrouped into the line-feature pairs {(T7311, T7331), (T7312, T7332), (T7313, T7322, T7333), (T7314, T7321)}, completing the line feature matching of the three images adjacent to the stitching hole of Figure 7.
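The slope-set matching can be sketched as a greedy nearest-slope search. This sketch matches pairwise only, whereas the worked example above also groups three-way matches; the threshold value and the sample slopes are hypothetical.

```python
def match_line_features(slopes_a, slopes_b, T1=0.05):
    """Greedy slope-based matching between the line features of two
    adjacent sub-images: features match when their slopes differ by
    less than the small positive threshold T1."""
    pairs, used = [], set()
    for i, ka in enumerate(slopes_a):
        best, best_d = None, T1
        for j, kb in enumerate(slopes_b):
            d = abs(ka - kb)
            if j not in used and d < best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs

# Slopes of fitted line segments from two adjacent sub-images:
left = [0.50, 2.00, -1.00]
right = [2.02, 0.49]
matched = match_line_features(left, right)
```

Slope alone is of course not a complete matching criterion (parallel but distinct edges would collide); a fuller implementation would also compare intercepts and endpoint proximity near the seam.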
3) Sub-module for compensating seams and holes by extrapolating boundary points from adjacent-region line features
Because of stitching holes such as T734 in Figure 7, the image in the hole must be compensated by extrapolating from the existing image information. In this invention, the compensation uses the matched line-feature pairs of Figure 7 obtained by the matching sub-module U732. Specifically: first, from the line-segment functions of an existing matched pair, construct a single line-segment function that satisfies every line feature in the pair; this function is taken as a reasonable fit to the line feature across the stitching hole. Next, use the newly constructed function to extrapolate into the hole, thereby locating the line feature determined by the matched pair. Finally, fill in the extrapolated line features of the hole by fusing the grey value, colour and brightness of the corresponding matched line features in the original stitched sub-images. After all matched pairs are processed in this way, the compensated line-feature images in the hole of Figure 7 are obtained, as shown at T7341, T7342, T7343 and T7344. Comparison with the original image T735 at the stitching hole shows that the hole image compensated by this sub-module remains essentially consistent with the original image.
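The extrapolation step can be sketched with a least-squares line fitted through two matched fragments; the fitted line then predicts where the feature crosses the hole. The function name and sample data are illustrative assumptions, not the patent's identifiers.

```python
import numpy as np

def extrapolate_across_hole(seg_a, seg_b):
    """Fit one line y = k*x + b through two matched, collinear line-segment
    fragments (lists of (x, y) points) from adjacent sub-images; the fitted
    line predicts the feature's course across the hole between them."""
    pts = np.array(list(seg_a) + list(seg_b), dtype=float)
    k, b = np.polyfit(pts[:, 0], pts[:, 1], 1)  # least-squares straight line
    return k, b

# Two matched fragments of the same scene edge, separated by a hole
# covering 4 < x < 8; both lie on y = 0.5*x + 1:
left_seg = [(0.0, 1.0), (2.0, 2.0), (4.0, 3.0)]
right_seg = [(8.0, 5.0), (10.0, 6.0)]
k, b = extrapolate_across_hole(left_seg, right_seg)
y_in_hole = k * 6.0 + b  # predicted feature position inside the hole
```

Once the feature's position inside the hole is known, its pixels are filled with grey, colour and brightness taken from the matched fragments, as described above.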
In summary, the workflow of the sub-image group seam compensation module is shown in Figure 6. First, the line feature extraction sub-module U731 extracts the line features of all sub-images in the rough panorama frame and collects the slopes of their fitted line-segment functions into a set. Second, the matching sub-module U732 completes the line feature matching of all sub-images, yielding every matched line-feature pair between sub-images. Third, the extrapolation sub-module U733 reconstructs, from the line-segment functions of each existing matched pair, a single line-segment function satisfying all the line features of that pair, and extrapolates it across the holes and seams to determine the positions of their line features. Finally, the line features and images in the holes are fused and filled using the grey value, colour and brightness of the original stitched sub-images, yielding a panorama frame free of seams and holes and completing the entire workflow of this module.
(4) Sub-image group seam fusion module
From the preceding analysis, to obtain a natural, complete panorama frame without seams and holes through the seam compensation module U73, the differences in grey level, colour and brightness between adjacent sub-images, and between the sub-images and the compensated seams and holes, must be eliminated by fusing their grey levels, colours and brightness. In this invention, the seam fusion module U74 adopts the method proposed by Szeliski for the fusion operation: assuming a given seam or hole is adjacent to m sub-images, the grey, colour and brightness values at a point P in the seam or hole are computed from the values at the points nearest to P in the adjacent regions of those m sub-images, using the following formula,
where g(p) denotes any one of the grey, colour and brightness values at P, and gi(xi, yi) denotes the corresponding value at the point of the i-th image nearest to P. The function ξ(x) is a linear weight determined by the distance from that nearest point to P: the larger the distance, the larger the weight, reaching 1 at the maximum distance and 0 at the minimum. Performing this fusion operation pixel by pixel over the seams and holes yields a natural, complete panorama frame without seams or holes, realizing the fusion function of this module.
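The per-pixel fusion can be sketched as a normalised linear weighting. Note that the sketch follows the weighting convention stated in the text above (larger distance, larger weight), even though conventional feathering uses the opposite rule; the normalisation and function name are illustrative assumptions, not the patent's exact formula.

```python
import numpy as np

def blend_point(values, distances):
    """Distance-weighted blending of one seam/hole pixel: each of the m
    neighbouring sub-images contributes the value g_i at its point nearest
    to P, weighted linearly by its distance (per the convention above) and
    normalised so the weights sum to 1."""
    v = np.asarray(values, dtype=float)
    d = np.asarray(distances, dtype=float)
    if d.sum() > 0:
        w = d / d.sum()
    else:
        w = np.full_like(d, 1.0 / len(d))  # degenerate case: plain average
    return float((w * v).sum())

# Two neighbouring sub-images at equal distances give a simple average:
mid = blend_point([100.0, 200.0], [2.0, 2.0])
```

Applying `blend_point` independently to the grey, colour and brightness channels of every seam and hole pixel reproduces the channel-by-channel fusion described in the text.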
With the working principle of each sub-module explained above, the workflow of the GPGPU video frame positioning, stitching, compensation and fusion unit U7 can be described as in Figure 5. First, the top-view sub-image group output by unit U6 serves as the source image group for panoramic stitching, and the positioning sub-module U71 predicts the area and size of each stitched sub-image in the panorama. Second, the rough stitching sub-module U72 uses the positioning information from U71 to assemble the rough panorama, which still contains seams and holes. Third, this panorama frame is sent to the seam compensation sub-module U73 to compensate the seams and holes. Finally, the seam fusion sub-module U74 performs the fusion of the seams and holes, producing a natural, complete panorama frame without seams or holes and completing the entire workflow of unit U7.
Real-time panoramic video stream generation unit
As the output unit of the invention, shown in Figure 2, the real-time panoramic video stream generation unit U9 is likewise a video-stream generation software system running on an NVIDIA high-performance GPU. Using a multi-thread scheduling mechanism and the CUDA parallel computing architecture, it assembles the natural, seam-free panorama frames (panoramic video frames) produced by unit U7 into a video stream in chronological order at 24 frames per second. A simple video compression algorithm then compresses the stream into a common video format for storage, completing the entire workflow of this unit.
The beneficial effects of the invention are as follows:
The invention provides a real-time video stitching method that stitches images of the same scene captured from different viewing angles and directions, without effective overlap and with variable structure, and then generates a video stream in real time. Based on the installation and performance parameters of the acquisition devices and the linear features of the captured images, the method establishes positioning, transformation and compensation-fusion models for the images. By exploiting these efficient image-processing models, the stitching method not only improves stitching accuracy but also preserves stitching efficiency, meeting the real-time requirements of video stream stitching.
Description of drawings
To illustrate the technical solutions of the embodiments of the invention or of the prior art more clearly, the drawings required in their description are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without inventive effort.
Figure 1 shows the fixed-point directional video real-time stitching method without effective overlap and with variable structure according to the invention;
Figure 2 is a schematic structural diagram of a video stitching device using the stitching method provided by an embodiment of the invention;
Figure 3 is a schematic diagram of a black-and-white checkerboard side view transformed into a top view, provided by an embodiment of the invention;
Figure 4 is a schematic diagram of the sub-image positioning module provided by an embodiment of the invention;
Figure 5 is a structural block diagram of the GPGPU video frame positioning, stitching, compensation and fusion unit provided by an embodiment of the invention;
Figure 6 is a structural block diagram of the sub-image group seam compensation module provided by an embodiment of the invention;
Figure 7 is a schematic diagram of the principle of the sub-image group seam compensation module provided by an embodiment of the invention;
Figure 8 is a flow chart of a fixed-point directional video real-time stitching method without effective overlap and with variable structure according to an embodiment of the invention.
Detailed description
The invention is described in further detail below with reference to the drawings and embodiments. The following embodiments illustrate the invention but must not be used to limit its scope.
With reference to Figure 8, a concrete implementation of the invention is further described. The invention was applied to a domestic 2650 m³ blast furnace, on which three cameras viewing the burden surface from the side were installed in different directions to capture the circular burden surface of 8.2 m diameter. Because of the dark, hot and dusty environment inside the blast furnace, a single camera cannot capture the entire burden surface, and each camera can only be installed in a fixed position and orientation. This satisfies the preconditions of the invention: three cameras image the same burden surface at different viewing angles and from different directions, the captured images share no effective overlap usable for stitching, and there may even be small gaps and holes between them. First, the necessary equipment of the video stitching device is installed according to Figure 2; after installation, video stitching of the captured video information begins according to the flow of Figure 1, with the following working steps.
1. Using the multi-camera installation parameters S61, calibrate the camera imaging parameters S62 of the three installed burden-surface cameras, and on that basis build the side-view-to-top-view geometric transformation model S63 for use in the sub-image group side-view-to-top-view step S6;
2. Using the fixed-point directional parameters of the three cameras and the plane of the subject S711, build the positioning model S712 for the stitched sub-images in the panorama, and use it to determine each sub-image's position in the panoramic image S713, the positions of the stitching holes and breaks S714, the sub-image overlap regions S715, and the adjacency relations among sub-images and between sub-images and the holes and breaks S716, for use in the later stitching;
3. The array of three cameras installed on the furnace top captures video of different positions of the blast-furnace burden surface, forming the multi-camera video sequence S4; the video stream synchronous segmentation unit segments the streams synchronously S51 to obtain the i-th-frame sub-image group to be stitched S52, corresponding to the i-th instant;
4. Using the geometric transformation model S63 built in step 1, convert the i-th-frame sub-image group S52 from side views to top views;
5. Using the sub-image positions in the panoramic image S713 determined in step 2, perform fixed-point directional panoramic rough stitching S721 on the group converted to top views;
6. From the rough panorama obtained in step 5, classify the seam regions by seam judgement S722 into three cases: seams with overlap regions, seamless seams without holes or overlap, and seams with holes and cracks;
7. For the overlapping seams identified in step 6, use the specific positions of the sub-image overlap regions S715 obtained in step 2 and stitch and fuse them directly with the conventional brightness-and-colour similarity matching method S741;
8. For the seamless seams without holes or overlap identified in step 6, stitch by fusing brightness and colour at the seam S742;
9. For the seams with holes and cracks identified in step 6: first use the adjacency relations S716 determined in step 2 to identify the sub-images adjacent to the stitching holes and seams S731; second, extract and match line features in the adjacent regions of those sub-images S732, and on that basis obtain the line features of the seams and holes by extrapolating boundary points from the adjacent-region line features S733, thereby compensating the line features of the holes and seams S734; finally, fuse the holes with the other parts by colour and brightness interpolation S735;
10. Steps 7, 8 and 9 yield the complete panorama of frame i at the i-th instant; the process then jumps back to step 3 to stitch the sub-image group captured by the three cameras at instant i+1, and this loop yields the time-ordered image sequence of the full panoramic burden surface; the acquired image sequence is synthesised into a real-time video stream, thereby providing real-time video information of the panoramic blast-furnace burden surface.
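The three seam cases of steps 6 to 9 amount to a dispatch on the seam classification. The sketch below only illustrates that control flow; the handler names and descriptions are placeholders keyed to the step labels above, not the patent's identifiers.

```python
def handle_seam(seam_type):
    """Dispatch the three seam cases of step 6 to the handlers of steps 7-9.
    The strings are illustrative summaries of each branch."""
    handlers = {
        "overlap": "S741: direct stitch with brightness/colour similarity fusion",
        "clean":   "S742: brightness/colour fusion along the seam line",
        "hole":    "S731-S735: line-feature extrapolation, then interpolation fusion",
    }
    return handlers[seam_type]

# Each frame's seam regions are classified and routed to one branch:
plan = [handle_seam(t) for t in ("overlap", "clean", "hole")]
```

Keeping the three branches separate is what lets the method stay real-time: the cheap branches (overlap, clean) run directly, and only holed seams pay for feature extraction and extrapolation.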
The above embodiments merely illustrate the invention and do not limit it. Although the invention has been described in detail with reference to the embodiments, those of ordinary skill in the art should understand that combinations, modifications and equivalent substitutions of its technical solutions that do not depart from the spirit and scope of those solutions all fall within the scope of the claims of the invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510016447.2A CN104506828B (en) | 2015-01-13 | 2015-01-13 | A kind of fixed point orientation video real-time joining method of nothing effectively overlapping structure changes |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN104506828A CN104506828A (en) | 2015-04-08 |
| CN104506828B true CN104506828B (en) | 2017-10-17 |
Family
ID=52948542
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201510016447.2A Active CN104506828B (en) | 2015-01-13 | 2015-01-13 | A kind of fixed point orientation video real-time joining method of nothing effectively overlapping structure changes |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN104506828B (en) |
Families Citing this family (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180048877A1 (en) * | 2016-08-10 | 2018-02-15 | Mediatek Inc. | File format for indication of video content |
| CN107085842B (en) * | 2017-04-01 | 2020-04-10 | 上海讯陌通讯技术有限公司 | Self-learning multipath image fusion real-time correction method and system |
| CN109214979B (en) * | 2017-07-04 | 2020-09-29 | 北京京东尚科信息技术有限公司 | Method and apparatus for fusing objects in panoramic video |
| CN108460738A (en) * | 2018-02-11 | 2018-08-28 | 湖南文理学院 | Medical image sloped correcting method based on B-spline |
| CN109685845B (en) * | 2018-11-26 | 2023-04-07 | 普达迪泰(天津)智能装备科技有限公司 | POS system-based real-time image splicing processing method for FOD detection robot |
| CN111127478B (en) * | 2019-12-13 | 2023-09-05 | 上海众源网络有限公司 | View block segmentation method and device |
| CN113763570B (en) * | 2020-06-01 | 2024-05-10 | 武汉海云空间信息技术有限公司 | High-precision rapid automatic splicing method for point cloud of tunnel |
| CN116612390B (en) * | 2023-07-21 | 2023-10-03 | 山东鑫邦建设集团有限公司 | Information management system for constructional engineering |
| CN117291804B (en) * | 2023-09-28 | 2024-09-13 | 武汉星巡智能科技有限公司 | Binocular image real-time splicing method, device and equipment based on weighted fusion strategy |
| CN118537215B (en) * | 2024-05-07 | 2025-03-07 | 自然资源部第七地形测量队 | A method and system for stitching coastal drone orthophotos |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2008112776A2 (en) * | 2007-03-12 | 2008-09-18 | Conversion Works, Inc. | Systems and methods for filling occluded information for 2-d to 3-d conversion |
| CN101479765A (en) * | 2006-06-23 | 2009-07-08 | 图象公司 | Method and system for converting 2D movies for stereoscopic 3D display |
| WO2011121117A1 (en) * | 2010-04-02 | 2011-10-06 | Imec | Virtual camera system |
| CN103763479A (en) * | 2013-12-31 | 2014-04-30 | 深圳英飞拓科技股份有限公司 | Splicing device for real-time high speed high definition panoramic video and method thereof |
| CN103985254A (en) * | 2014-05-29 | 2014-08-13 | 四川川大智胜软件股份有限公司 | Multi-view video fusion and traffic parameter collecting method for large-scale scene traffic monitoring |
2015-01-13: Application CN201510016447.2A filed in China (CN); granted as CN104506828B; status: Active
Non-Patent Citations (1)
| Title |
|---|
| Research on Real-time Video Stitching Technology Based on Fisheye Cameras; Sun Juhui; China Master's Theses Full-text Database (Science & Technology Series); 2014-09-15 (No. 9); full text * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN104506828A (en) | 2015-04-08 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN104506828B (en) | Real-time stitching method for fixed-point, directional video without effective overlap and with variable structure | |
| CN113221665B (en) | A video fusion algorithm based on dynamic optimal stitching line and improved fade-in and fade-out method | |
| CN109218702B (en) | A camera rotation type 3D measurement and information acquisition device | |
| CN110782394A (en) | Panoramic video rapid splicing method and system | |
| TWI555378B | Image calibration, composition, and depth reconstruction method and system for a panoramic fisheye camera | |
| JP5751986B2 (en) | Image generation device | |
| JP4947593B2 (en) | Apparatus and program for generating free viewpoint image by local region segmentation | |
| WO2021120407A1 (en) | Parallax image stitching and visualization method based on multiple pairs of binocular cameras | |
| CN103501409B (en) | Ultrahigh resolution panorama speed dome AIO (All-In-One) system | |
| CN104299215B | Image stitching method based on feature point calibration and matching | |
| CN112085659A (en) | A panorama stitching fusion method, system and storage medium based on spherical screen camera | |
| CN107431796A | Omnidirectional stereoscopic capture and rendering of panoramic virtual reality content | |
| CN106878687A (en) | A multi-sensor based vehicle environment recognition system and omnidirectional vision module | |
| CN101146231A (en) | Method for generating panoramic video based on multi-view video stream | |
| WO2021093584A1 (en) | Free viewpoint video generation and interaction method based on deep convolutional neural network | |
| CN101916455B (en) | Method and device for reconstructing three-dimensional model of high dynamic range texture | |
| CN107274346A (en) | Real-time panoramic video splicing system | |
| CN105005964B | Rapid generation method for geographic scene panoramas based on video sequence images | |
| CN107154014A | Real-time color and depth panorama stitching method | |
| CN111064945B (en) | Naked eye 3D image acquisition and generation method | |
| CN206611521U | Multi-sensor-based vehicle environment recognition system and omnidirectional vision module | |
| CN113436130B (en) | An unstructured light field intelligent perception system and device | |
| CN110458964A (en) | A Real-time Calculation Method of Dynamic Lighting in Real Environment | |
| JP2006515128A (en) | Stereo panoramic image capturing device | |
| CN107358577A | Fast stitching method for cubic panoramas |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |