
CN101931819B - Temporal error concealment method - Google Patents


Info

Publication number
CN101931819B
CN101931819B, CN200910149802A, CN 200910149802
Authority
CN
China
Prior art keywords
damaged
block
blocks
merged
error value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN 200910149802
Other languages
Chinese (zh)
Other versions
CN101931819A
Inventor
黄士嘉
郭斯彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Acer Inc
Original Assignee
Acer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Acer Inc filed Critical Acer Inc
Priority to CN 200910149802
Publication of CN101931819A
Application granted
Publication of CN101931819B
Active legal status
Anticipated expiration legal status

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides a temporal error concealment method comprising the following steps: detecting a damaged macroblock consisting of four 8×8 damaged blocks; obtaining the motion vectors of a plurality of 4×4 neighboring blocks surrounding the damaged macroblock; and, for each 8×8 damaged block, determining a predicted motion vector from the motion vectors of the six 4×4 neighboring blocks closest to that 8×8 damaged block.

Description

Temporal Error Concealment Method

Technical Field

The present invention relates to a temporal error concealment method, and more particularly to a temporal error concealment method that predicts motion vectors from neighboring blocks, determines a block partition mode based on the predicted motion vectors, and performs a motion vector refinement procedure using a partial distortion comparison.

Background Art

As multimedia applications have grown in popularity, many video coding techniques have been developed to compress video files effectively. The purpose of compression is to remove redundancy from image data, thereby reducing the storage space and transmission volume of images. The H.264 compression standard employs two predictive coding techniques: intra prediction and inter prediction. Intra prediction exploits the spatial correlation between neighboring blocks within the same frame, while inter prediction exploits the temporal correlation between adjacent frames.

For inter prediction, the H.264 compression standard defines seven partition modes of different block sizes for each 16×16 macroblock: 16×16 (T1 mode), 16×8 (T2 mode), 8×16 (T3 mode), 8×8 (T4 mode), 8×4 (T5 mode), 4×8 (T6 mode), and 4×4 (T7 mode), as shown in FIG. 1. After encoding, each block is represented by a residual and a motion vector. Smaller block sizes yield better picture quality but require more computation and time. Therefore, to balance picture quality against coding efficiency, blocks of different sizes are generally chosen according to picture complexity, yielding better compression performance.
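The seven partition modes can be summarized as a small lookup table (block sizes are taken from the text; the T-mode labels follow this document's naming, and the helper function is an illustrative assumption):

```python
# The seven H.264 inter-prediction partition modes described above,
# expressed as (width, height) block sizes in pixels.
PARTITION_MODES = {
    "T1": (16, 16),
    "T2": (16, 8),
    "T3": (8, 16),
    "T4": (8, 8),
    "T5": (8, 4),
    "T6": (4, 8),
    "T7": (4, 4),
}

def blocks_per_macroblock(mode):
    """Number of partitions a 16x16 macroblock is split into under a mode."""
    w, h = PARTITION_MODES[mode]
    return (16 // w) * (16 // h)
```

For example, T1 keeps the macroblock whole, while T7 splits it into sixteen 4×4 blocks.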

After compression, video data becomes bitstreams that are easy to transmit and store. However, during transmission (especially over wireless channels), these highly compressed video bitstreams are prone to problems such as packet erasure. To prevent lost video packets from degrading picture quality, commonly used protection mechanisms include automatic repeat request (ARQ), forward error correction (FEC), and error concealment. Compared with ARQ and FEC, error concealment requires no additional bandwidth and performs better in broadcast and multicast scenarios. Error concealment techniques executed at the decoder fall into two main categories: spatial error concealment and temporal error concealment. Spatial error concealment mainly uses spatially redundant information within the same frame to recover the damaged video sequence, while temporal error concealment exploits the high correlation between successive frames of the coded sequence to reconstruct the damaged sequence. Since adjacent frames are generally highly correlated, temporal error concealment usually provides better picture quality than spatial error concealment, except in certain special cases (such as scene changes or the sudden appearance or disappearance of objects).

When a temporal error concealment method is used to reconstruct a damaged block, the motion vector of the damaged block must first be obtained, since information from other reference frames is used. Several simple methods have been developed to predict the motion vector of a damaged block, for example setting the predicted motion vector to zero, using the average of the motion vectors of spatially neighboring blocks, or using the motion vector of the co-located block in the reference frame. For these methods, reference may be made to "Error control and concealment for video communication: a review" by Y. Wang et al., Proc. IEEE, vol. 86, no. 5, pp. 974-997, May 1998; "Enhanced error concealment with mode selection" by D. Agrafiotis et al., IEEE Trans. Circuits Syst. Video Technology, vol. 16, no. 8, pp. 960-973, Aug. 2006; "An efficient error concealment implementation for MPEG-4 video streams" by S. Valente et al., IEEE Trans. Consumer Electronics, vol. 47, no. 3, pp. 568-578, Aug. 2001; "A novel selective motion vector matching algorithm for error concealment in MPEG-4 video transmission over error-prone channels" by B. Yan et al., IEEE Trans. Consumer Electronics, vol. 49, no. 4, pp. 1416-1423, Nov. 2003; "A cell-loss concealment technique for MPEG-2 coded video" by J. Zhang et al., IEEE Trans. Circuits Syst. Video Technol., vol. 10, no. 4, pp. 659-665, Jun. 2000; "Robust error concealment for visual communications in burst-packet-loss networks" by J. Y. Pyun et al., IEEE Trans. Consum. Electron., vol. 49, no. 4, pp. 1013-1019, Nov. 2003; and "Temporal Error Concealment for H.264 Using Optimum Regression Plane" by S. C. Huang et al., in Proc. Int. Conf. MultiMedia Modeling (MMM), Jan. 2008, LNCS 4903, pp. 402-412; the contents of which are incorporated herein by reference.

In addition, for various other improvements proposed for temporal error concealment techniques, reference may be made to "Temporal error concealment using motion field interpolation" by M. E. Al-Mualla et al., Electron. Lett., vol. 35, pp. 215-217, 1999; "Vector rational interpolation schemes for erroneous motion field estimation applied to MPEG-2 error concealment" by S. Tsekeridou et al., IEEE Trans. Multimedia, vol. 6, no. 6, pp. 876-885, Dec. 2004; "A motion vector recovery algorithm for digital video using Lagrange interpolation" by J. Zheng et al., IEEE Trans. Broadcast., vol. 49, no. 4, pp. 383-389, Dec. 2003; "Error-concealment algorithm for H.26L using first-order plane estimation" by J. Zheng et al., IEEE Trans. Multimedia, vol. 6, no. 6, pp. 801-805, Dec. 2004; "Efficient motion vector recovery algorithm for H.264 based on a polynomial model" by J. Zheng et al., IEEE Trans. Multimedia, vol. 7, no. 3, pp. 507-513, Jun. 2005; "A concealment method for video communications in an error-prone environment" by S. Shirani et al., IEEE Journal on Selected Areas in Communication, vol. 18, pp. 1122-1128, June 2000; "Multiframe error concealment for MPEG-coded video delivery over error-prone networks" by Y. C. Lee et al., IEEE Trans. Image Process., vol. 11, no. 11, pp. 1314-1331, Nov. 2002; "POCS-based error concealment for packet video using multiframe overlap information" by G. S. Yu et al., IEEE Trans. Circuits Syst. Video Technol., vol. 8, pp. 422-434, Aug. 1998; "Concealment of whole-frame losses for wireless low bit-rate video based on multiframe optical flow estimation" by S. Belfiore et al., IEEE Trans. Multimedia, vol. 7, no. 2, pp. 316-329, Apr. 2005; and "Frame concealment for H.264/AVC decoders" by P. Baccichet et al., IEEE Trans. Consumer Electronics, vol. 51, no. 1, pp. 227-233, Feb. 2005; the contents of which are incorporated herein by reference.

Although many studies have improved temporal error concealment methods, there is still room for improvement in the prediction accuracy of motion vectors and in the effectiveness and efficiency of compensation.

Summary of the Invention

In view of the problems of the prior art, the present invention provides a high-performance temporal error concealment method suitable for H.264, which can effectively improve motion vector prediction accuracy and error concealment performance.

According to one aspect of the present invention, a method of temporal error concealment for video decoding is provided, comprising the following steps: a. detecting a damaged macroblock composed of four 8×8 damaged blocks; b. obtaining the motion vectors of a plurality of 4×4 neighboring blocks surrounding the damaged macroblock; c. for each 8×8 damaged block, determining a predicted motion vector of the 8×8 damaged block from the motion vectors of the six 4×4 neighboring blocks closest to it; d. comparing the predicted motion vectors of the four 8×8 damaged blocks to decide whether to leave the four 8×8 damaged blocks unmerged, merge them pairwise into two 16×8 merged damaged blocks, merge them pairwise into two 8×16 merged damaged blocks, or merge all four into one 16×16 merged damaged block; e. for each merged 16×8, 8×16, or 16×16 damaged block, setting its predicted motion vector to the average of the predicted motion vectors of the 8×8 damaged blocks it contains; and f. for each 8×8 damaged block, 16×8 merged damaged block, 8×16 merged damaged block, or 16×16 merged damaged block, using the corresponding predicted motion vector as a starting point and performing pixel comparisons against a plurality of reference blocks within a search window in a reference frame, so as to find, within the search window, a corresponding reference block that matches the damaged block or merged damaged block.

Other aspects of the present invention will be set forth in part in the description that follows, and in part will be apparent from the description or may be learned from the embodiments of the invention. The aspects of the invention may be understood and attained by means of the elements and combinations particularly pointed out in the appended claims. It is to be understood that both the foregoing summary and the following detailed description are exemplary only and are not restrictive of the invention.

Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention. The embodiments described herein are preferred embodiments; however, it must be understood that the invention is not limited to the configurations and elements shown, in which:

FIG. 1 shows the partition modes of different block sizes defined by the H.264 compression standard for inter prediction;

FIG. 2 is a flowchart of the temporal error concealment method provided by the present invention;

FIG. 3 shows a macroblock lost or damaged during transmission, together with its neighboring 4×4 blocks;

FIG. 4 shows the upper-left 8×8 block of the macroblock of FIG. 3;

FIG. 5 shows an active regression plane calculated from six neighboring 4×4 blocks in one embodiment;

FIG. 6 is a flowchart of a method for determining a variable block size mode according to an embodiment of the present invention;

FIG. 7 shows the comparison order adopted in an embodiment of the present invention; and

FIG. 8 and FIG. 9 are a schematic diagram and a flowchart illustrating the refinement procedure of an embodiment of the present invention, taking the T4 (8×8) mode as an example.

[Description of Main Element Labels]

300    damaged macroblock
300A, 300B, 300C, 300D    8×8 blocks
310, 312, 314, 316    4×4 blocks
320, 322, 324, 326    4×4 blocks
330, 332, 334, 336    4×4 blocks
340, 342, 344, 346    4×4 blocks
500    active regression plane
800    8×8 damaged block
810, 820    8×8 neighboring blocks
812, 814, 816, 818    4×4 blocks
822, 824, 826, 828    4×4 blocks

Detailed Description

To make the description of the present invention more detailed and complete, reference may be made to the following description in conjunction with FIG. 2 through FIG. 9. The devices, elements, and method steps described in the following embodiments are intended only to illustrate the present invention, not to limit its scope.

FIG. 2 is a flowchart of the temporal error concealment method provided by the present invention. First, in step S200, a video signal comprising a plurality of frames is received and decoded, at least one of the frames having a lost or damaged macroblock. In step S210, the motion vectors of the spatially neighboring blocks of a lost or damaged macroblock are determined. Next, in step S220, the motion vector of the damaged block is predicted using an active regression plane constructed from the motion vectors of the neighboring blocks. In step S230, the partition mode of the damaged macroblock is determined according to the predicted motion vectors of the damaged blocks. Finally, in step S240, motion refinement with variable block sizes is performed to search the reference frame for a better compensation block. In addition, the present invention proposes several methods for terminating step S240 early, reducing the time required for the refinement procedure and improving compensation efficiency. Steps S220 to S240 are described in more detail below.
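The overall flow of steps S220 through S240 can be sketched as follows. This is a minimal sketch under stated assumptions: the function names and data shapes are not from the patent, and the three callables stand in for the prediction, mode-decision, and refinement procedures detailed later in this description.

```python
def conceal_macroblock(predict_mvs, choose_mode, refine_mv, damaged_mb):
    """Orchestrates steps S220-S240 for one damaged macroblock.

    predict_mvs : returns one predicted MV per 8x8 sub-block (S220)
    choose_mode : merges sub-blocks, returns (mode, {block: mv}) (S230)
    refine_mv   : refines a block's MV against the reference frame (S240)
    """
    mvs = predict_mvs(damaged_mb)                  # S220
    mode, merged = choose_mode(mvs)                # S230
    refined = {blk: refine_mv(blk, mv)             # S240
               for blk, mv in merged.items()}
    return mode, refined
```

For example, when the mode decision returns T1, the whole macroblock inherits a single averaged vector before refinement.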

Referring to FIG. 3, macroblock (MB) 300 is a 16×16 macroblock lost or damaged during transmission, and blocks 310, 312, 314, 316, 320, 322, 324, 326, 330, 332, 334, 336, 340, 342, 344, and 346 are its neighboring 4×4 blocks. The present invention exploits the high correlation between the motion vectors of spatially adjacent blocks to predict the motion vector of the damaged MB 300. First, as shown in FIG. 3, the damaged MB 300 is divided into four 8×8 blocks (upper-left block 300A, lower-left block 300B, lower-right block 300C, and upper-right block 300D). Based on the correlation between each damaged 8×8 block and its six neighboring 4×4 blocks, the present invention proposes an active regression plane to predict the motion vector of each damaged 8×8 block; this plane is a second-order surface calculated from the positions of the neighboring blocks and their corresponding motion vectors.

Referring to FIG. 4, the upper-left 8×8 block 300A of FIG. 3 is taken as an example to illustrate how the present invention predicts a motion vector with the active regression plane. Since blocks that are spatially closer to each other generally have higher correlation, the present invention selects the six blocks 312, 314, 316, 320, 322, and 324 closest to block 300A to calculate the motion vector of block 300A. Setting the center of block 300A as the origin (0, 0), the center coordinates of the six neighboring 4×4 blocks 312, 314, 316, 320, 322, and 324 are (6, 6), (2, 6), (-2, 6), (-6, 2), (-6, -2), and (-6, -6), respectively, and their corresponding motion vectors are denoted V1, V2, V3, V4, V5, and V6. The active regression plane proposed by the present invention is:

Z(xi, yi) = α1·xi² + α2·xi·yi + α3·yi² + α4·xi + α5·yi + α6

where xi and yi are the center coordinates of the six neighboring 4×4 blocks, and Z is the corresponding motion vector. Substituting the center coordinates of the six neighboring 4×4 blocks and their corresponding motion vectors V1, V2, V3, V4, V5, and V6 into the above equation yields the coefficients α1, α2, α3, α4, α5, and α6. Referring to FIG. 5, which shows an active regression plane 500 calculated from six neighboring 4×4 blocks in one embodiment, the motion vector of the missing 8×8 block 300A lies on this plane 500. Expressed mathematically, the motion vector of the missing 8×8 block, evaluated at the center coordinates (0, 0), is:

Z(x, y) = V1·(Δ1/Δ) + V2·(Δ2/Δ) + V3·(Δ3/Δ) + V4·(Δ4/Δ) + V5·(Δ5/Δ) + V6·(Δ6/Δ)

where Δ = det(M), with M being the 6×6 matrix whose i-th row is (xi², xi·yi, yi², xi, yi, 1) for the i-th neighboring block:

    M = | x1²  x1y1  y1²  x1  y1  1 |
        | x2²  x2y2  y2²  x2  y2  1 |
        | x3²  x3y3  y3²  x3  y3  1 |
        | x4²  x4y4  y4²  x4  y4  1 |
        | x5²  x5y5  y5²  x5  y5  1 |
        | x6²  x6y6  y6²  x6  y6  1 |

and, for k = 1, ..., 6, Δk = det(Mk), where Mk is the matrix M with its k-th row replaced by the row (x², xy, y², x, y, 1) evaluated at the query point (x, y).
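The regression prediction can be sketched numerically as follows. This is a least-squares sketch rather than the determinant form above, under a stated assumption: because the six neighbor centers lie on only two lines (the top row y = 6 and the left column x = -6), the full 6×6 system including the x·y cross term is singular, so this sketch drops that term and fits the remaining five coefficients. Function and variable names are illustrative, not from the patent.

```python
import numpy as np

# Centers of the six 4x4 neighbors closest to the upper-left 8x8 block
# (block 300A), with the damaged block's center as the origin (see text).
NEIGHBOR_CENTERS = [(6, 6), (2, 6), (-2, 6), (-6, 2), (-6, -2), (-6, -6)]

def predict_mv(neighbor_mvs, centers=NEIGHBOR_CENTERS):
    """Least-squares fit of a quadratic regression surface through the six
    neighbor motion vectors, evaluated at the block center (0, 0).
    Each MV component (horizontal, vertical) is fitted independently."""
    # Design matrix with basis {x^2, y^2, x, y, 1}; the x*y term is
    # omitted because the neighbor centers lie on two lines, which would
    # make the full second-order system singular (see lead-in).
    A = np.array([[x * x, y * y, x, y, 1.0] for x, y in centers])
    V = np.asarray(neighbor_mvs, dtype=float)        # shape (6, 2)
    alpha, *_ = np.linalg.lstsq(A, V, rcond=None)    # shape (5, 2)
    # At (0, 0) every non-constant basis term vanishes, so the predicted
    # MV is simply the fitted constant coefficient.
    return alpha[-1]
```

For a uniform motion field the fit reproduces the common vector exactly, which is the sanity property one would expect of the plane prediction.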

After the motion vectors of the four 8×8 blocks 300A, 300B, 300C, and 300D in FIG. 3 have been calculated, the present invention can use the variable block size modes defined by H.264 and, according to the four calculated motion vectors, decide whether to adopt the T1 (16×16), T2 (16×8), T3 (8×16), or T4 (8×8) mode for the subsequent motion vector refinement procedure. FIG. 6 shows a flowchart of a method for determining the variable block size mode according to an embodiment of the present invention. First, in step S600, taking the damaged macroblock 300 of FIG. 3 as an example, the predicted motion vectors of the four 8×8 blocks 300A, 300B, 300C, and 300D in the damaged macroblock 300 are obtained. Next, in step S610, the motion vectors of horizontally adjacent 8×8 blocks are compared, and it is determined whether the following conditions are satisfied:

|MV1 − MV2| ≤ TH_H1 and |MV3 − MV4| ≤ TH_H2;

where MV1, MV2, MV3, and MV4 are the predicted motion vectors of the upper-left, upper-right, lower-left, and lower-right 8×8 blocks, respectively (blocks 300A, 300D, 300B, and 300C in FIG. 3), and TH_H1 and TH_H2 are threshold values for the horizontal motion vector differences, which may be adjusted for different applications. In one embodiment, TH_H1 and TH_H2 may both be set to 1. If the above conditions are satisfied, horizontal merging can be performed and the procedure proceeds to step S620; otherwise, no horizontal merging is performed and the procedure proceeds to step S630. In steps S620 and S630, the motion vectors of vertically adjacent 8×8 blocks are compared, and it is determined whether the following conditions are satisfied:

|MV1 − MV3| ≤ TH_V1 and |MV2 − MV4| ≤ TH_V2;

where TH_V1 and TH_V2 are threshold values for the vertical motion vector differences, which in one embodiment may both be set to 1. If the determination in step S620 is affirmative, the procedure proceeds to step S622, adopting the T1 mode and merging the four 8×8 blocks into one 16×16 macroblock; otherwise, the procedure proceeds to step S624, adopting the T2 mode and performing only horizontal merging to form two 16×8 sub-macroblocks. In step S630, if the determination is affirmative, the procedure proceeds to step S632, adopting the T3 mode and performing vertical merging to form two 8×16 sub-macroblocks; otherwise, the procedure proceeds to step S634, performing no merging and adopting the T4 mode with four 8×8 sub-macroblocks.
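The mode decision of FIG. 6 can be sketched as follows. Two assumptions are made explicit here: |MV_i − MV_j| is read as the maximum component difference of the two vectors, and a single threshold stands in for TH_H1, TH_H2, TH_V1, and TH_V2 (all 1 in the embodiment); the function names are illustrative.

```python
def choose_partition_mode(mv_ul, mv_ur, mv_ll, mv_lr, th=1.0):
    """Steps S610-S634: merge the four 8x8 blocks into larger blocks when
    their predicted MVs are close. MVs are (x, y) tuples."""
    def close(a, b):
        # One plausible reading of |MV_i - MV_j| <= TH: both components
        # must differ by at most the threshold.
        return max(abs(a[0] - b[0]), abs(a[1] - b[1])) <= th

    horiz = close(mv_ul, mv_ur) and close(mv_ll, mv_lr)   # S610
    vert = close(mv_ul, mv_ll) and close(mv_ur, mv_lr)    # S620 / S630
    if horiz and vert:
        return "T1"   # one 16x16 block (S622)
    if horiz:
        return "T2"   # two 16x8 blocks (S624)
    if vert:
        return "T3"   # two 8x16 blocks (S632)
    return "T4"       # four 8x8 blocks (S634)
```

For instance, four nearly identical vectors collapse to T1, while a top/bottom split in the motion field yields T2.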

After the partition mode (T1, T2, T3, or T4) has been determined, a motion vector refinement procedure is performed for each sub-macroblock to search the reference frame for a better compensation block. The motion vector refinement procedure is similar to the motion estimation procedure at the encoder: taking the motion vector predicted for each sub-macroblock as the starting point (if a sub-macroblock contains two or more 8×8 blocks, its motion vector is the average of the predicted motion vectors of those 8×8 blocks), the error values of its outer boundary pixels (such as the sum of absolute differences, SAD) are compared in order to find a better compensation block in the reference frame. FIG. 7 shows the comparison order adopted in an embodiment of the present invention: starting from the predicted motion vector (point 0 in FIG. 7), candidate blocks within a search window of the reference frame are compared one by one in an outward spiral order.
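The outward-spiral visiting order can be sketched as follows. The ring-by-ring enumeration is one plausible reading of FIG. 7; the exact order in which points within a ring are visited is an assumption, as is the generator name.

```python
def spiral_offsets(radius):
    """Candidate-offset visiting order for the refinement search: start at
    the predicted MV (offset (0, 0), point 0 in FIG. 7) and move outward
    ring by ring, so nearer candidates are compared first."""
    yield (0, 0)
    for r in range(1, radius + 1):
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                # Keep only points on the boundary of the r-th square ring.
                if max(abs(dx), abs(dy)) == r:
                    yield (dx, dy)
```

Combined with early termination, ordering candidates by distance from the prediction lets the search stop after examining only the innermost rings in the common case.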

During the matching process, the invention proposes a partial-distortion matching method and two early-termination methods to reduce the time required for motion-vector refinement. The first early-termination method sets a threshold based on the original residual of the corresponding boundary block at encoding time: DT_a = μ×N+γ, where N is the total number of pixels in the corresponding boundary block, μ is the average residual per pixel, and γ is a constant coefficient. The second method sets another threshold based on the matching results of the other damaged blocks already concealed in the current picture: DT_b = EBME(u,v)×EBME_β/EBME_α×λ+ε, where (u,v) and EBME(u,v) are the motion vector of the sub-macroblock currently being matched and the external-boundary matching error obtained in its first comparison, EBME_α and EBME_β are, respectively, the first-comparison external-boundary matching error and the minimum external-boundary matching error of the other damaged blocks already processed in the current picture, λ is a constant scaling factor (e.g., 0.6), and ε is a constant coefficient (e.g., 0).
During boundary matching, once the matching error is less than or equal to the threshold DT_a or DT_b, a matching block is deemed found and the search for that damaged sub-macroblock stops, reducing the amount of external-boundary matching computation. In addition, the proposed partial-distortion matching method treats 16 pixels (4×4) as one unit and accumulates the comparison one pixel at a time within each unit, reducing the number of pixels that must be compared during motion-vector refinement.
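The two early-termination thresholds can be transcribed directly from the formulas above; the function names are illustrative:

```python
def dt_a(mu, n, gamma=0.0):
    """DT_a = mu * N + gamma: threshold from the encoder-side residual of
    the corresponding boundary block (mu = average residual per pixel,
    n = total pixel count, gamma = constant coefficient)."""
    return mu * n + gamma

def dt_b(ebme_uv, ebme_alpha, ebme_beta, lam=0.6, eps=0.0):
    """DT_b = EBME(u,v) * EBME_beta / EBME_alpha * lambda + eps: threshold
    derived from blocks already concealed in the current picture
    (ebme_uv = first-comparison error of the current sub-macroblock,
    ebme_alpha / ebme_beta = first-comparison / minimum error of the
    previously processed blocks)."""
    return ebme_uv * ebme_beta / ebme_alpha * lam + eps
```

A candidate whose boundary error falls to or below either threshold ends the search for that sub-macroblock.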

FIG. 8 and FIG. 9 are a schematic diagram and a flowchart illustrating the refinement procedure of an embodiment of the invention, using the T4 (8×8) mode as an example. Block 800 is the top-left 8×8 block of a damaged macroblock; during refinement, the pixel error values of its upper and left external neighboring blocks 810 and 820 are compared. In general, the SAD value for block 800 can be expressed as:

D = Σ_{i=0}^{8} Σ_{j=0}^{8} |A_Top(x_0+i, y_0+j) − R(x_0+i+u, y_0+j+v)|
  + Σ_{i=0}^{8} Σ_{j=0}^{8} |A_Left(x_0+i, y_0+j) − R(x_0+i+u, y_0+j+v)|

where A_Top and A_Left denote pixels in the current picture, R(x, y) denotes the corresponding pixel in the reference picture, (x_0, y_0) denotes the coordinates of the top-left pixel of the corresponding block, and (u, v) denotes the motion vector associated with block 800.
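As a concrete sketch of the boundary SAD above, assuming one-pixel-wide external boundary strips for A_Top and A_Left (the array layout and function name are assumptions, not taken from the patent):

```python
import numpy as np

def boundary_sad(cur, ref, x0, y0, u, v, n=8):
    """SAD over the external boundary pixels above and to the left of an
    n x n damaged block whose top-left pixel is (x0, y0), evaluated for a
    candidate motion vector (u, v). cur and ref are 2D arrays indexed
    [row, col] = [y, x]."""
    # Strip of pixels directly above the block, displaced by (u, v) in ref.
    top = np.abs(cur[y0 - 1, x0:x0 + n].astype(int) -
                 ref[y0 - 1 + v, x0 + u:x0 + u + n].astype(int)).sum()
    # Strip of pixels directly to the left of the block.
    left = np.abs(cur[y0:y0 + n, x0 - 1].astype(int) -
                  ref[y0 + v:y0 + v + n, x0 - 1 + u].astype(int)).sum()
    return int(top + left)
```

A perfect match (identical boundary content at the displaced position) yields a SAD of zero.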

Referring to FIG. 8, the partial-distortion matching method of the invention uses a 4×4 block as a unit, dividing the external neighboring blocks 810 and 820 into four 4×4 blocks each (812, 814, 816, 818 and 822, 824, 826, 828), where each 4×4 block contains 16 pixels. The invention divides the total distortion D of blocks 810 and 820 into 16 partial distortions (d_p, p = 1 to 16), each partial distortion d_p comprising 8 pixels that occupy the same position within the 4×4 blocks 812, 814, 816, 818, 822, 824, 826, and 828. For example, partial distortion d_1 comprises the 8 pixels marked by hatching in FIG. 8. In one embodiment, the order of d_1 through d_16 within each 4×4 block is as shown in FIG. 8, and each partial distortion d_p can be expressed as:

d_p = Σ_{i=0}^{1} Σ_{j=0}^{1} |A_Top(x_0+4i+s_p, y_0+4j+t_p) − R(x_0+4i+s_p+u, y_0+4j+t_p+v)|
    + Σ_{i=0}^{1} Σ_{j=0}^{1} |A_Left(x_0+4i+s_p, y_0+4j+t_p) − R(x_0+4i+s_p+u, y_0+4j+t_p+v)|

The (s_p, t_p) value corresponding to each p is listed in the following table:

  p    (s_p, t_p)        p    (s_p, t_p)
  1    (0,0)             9    (1,0)
  2    (2,2)            10    (3,2)
  3    (2,0)            11    (0,1)
  4    (0,2)            12    (2,3)
  5    (1,1)            13    (3,0)
  6    (3,3)            14    (1,2)
  7    (3,1)            15    (2,1)
  8    (1,3)            16    (0,3)
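The table can be transcribed directly; a quick check (the list name `SP_TP` is illustrative) confirms that the 16 offsets visit every position of a 4×4 block exactly once:

```python
# (s_p, t_p) offsets for p = 1..16, transcribed from the table above.
SP_TP = [(0, 0), (2, 2), (2, 0), (0, 2), (1, 1), (3, 3), (3, 1), (1, 3),
         (1, 0), (3, 2), (0, 1), (2, 3), (3, 0), (1, 2), (2, 1), (0, 3)]

# The sequence is a permutation of all 16 positions of a 4x4 block, so the
# partial distortions d_1..d_16 together cover every boundary pixel.
assert sorted(SP_TP) == [(s, t) for s in range(4) for t in range(4)]
```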

In addition, the p-th accumulated distortion value D_p, accumulated up to the p-th partial distortion, is defined as:

D_p = Σ_{i=1}^{p} d_i

Based on the above definitions, the motion-vector refinement procedure for a damaged block in an embodiment of the invention is described below. Referring to FIG. 9, step S900 determines whether this is the first external-boundary comparison for the damaged block. If so, the procedure proceeds to step S910, computing all partial distortions d_1 through d_16 and the total distortion D_16 = d_1 + d_2 + ... + d_16. Steps S912 and S914 then test whether D_16 is less than or equal to DT_a and DT_b, respectively (i.e., whether an early-termination condition is met); if so, the procedure proceeds to step S950 and the search for this damaged block ends. If both steps S912 and S914 are negative, the procedure proceeds to step S916, setting the total distortion D_16 of the first comparison as D_min, the decision reference for matching the next candidate block. Next, in steps S920 through S925, the external-boundary comparison of the next candidate block is performed. Starting from p = 1, d_p (8 pixels) and D_p are computed in turn; as soon as D_p > p×D_min/16, the comparison of this candidate block ends and the procedure proceeds to step S940.
If D_1 < D_min/16, the above steps are repeated: d_2 and D_2 are computed and D_2 is compared with 2×D_min/16. The comparison runs from p = 1 to p = 16, and whenever D_p exceeds p×D_min/16 the comparison of the candidate block is aborted. After the comparison reaches p = 16, if D_16 is smaller than D_min, the procedure proceeds to steps S930 and S932, which test whether D_16 is less than or equal to DT_a and DT_b, respectively (i.e., whether an early-termination condition is met); if so, the procedure proceeds to step S950 and the search for this damaged block ends. If both steps S930 and S932 are negative, the procedure proceeds to step S934, setting the total distortion D_16 of this comparison as the new D_min, the decision reference for matching the next candidate block. Then, in step S940, it is determined whether the comparison of the last candidate block in the search window has been completed; if not, the procedure returns to step S920 to continue the external-boundary comparison of the next candidate block, and if so, it proceeds to step S950, ending the refinement procedure for this damaged block.
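The candidate loop of steps S920 through S925 can be sketched as follows (names are illustrative; `d_parts` stands for whatever routine computes the 8-pixel partial distortion d_p of the current candidate):

```python
def partial_distortion_match(d_parts, d_min):
    """Accumulate the 16 partial distortions of one candidate block in the
    order d_1..d_16 and reject the candidate as soon as the running sum D_p
    exceeds p * D_min / 16 -- the normalized-threshold early exit above.

    d_parts: callable p -> d_p (p = 1..16) for this candidate.
    Returns the full distortion D_16, or None if the candidate is rejected."""
    acc = 0.0
    for p in range(1, 17):
        acc += d_parts(p)
        if acc > p * d_min / 16.0:
            return None  # cannot beat the current best; abort this candidate
    return acc
```

A returned D_16 smaller than D_min would replace D_min (step S934) before the next candidate is examined.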

The foregoing describes merely preferred embodiments of the invention and is not intended to limit the scope of its claims; all equivalent changes or modifications made without departing from the spirit disclosed herein shall fall within the scope of the appended claims.

Claims (8)

1. A method for temporal error concealment in video decoding, the method comprising:
a. detecting a damaged macroblock, the damaged macroblock consisting of four 8×8 damaged blocks;
b. obtaining motion vectors of a plurality of 4×4 neighboring blocks surrounding the damaged macroblock;
c. for each 8×8 damaged block, determining a predicted motion vector of the 8×8 damaged block from the motion vectors of the six 4×4 neighboring blocks, among the plurality of 4×4 neighboring blocks, that are closest to the 8×8 damaged block;
d. comparing the predicted motion vectors of the four 8×8 damaged blocks to determine whether to leave the four 8×8 damaged blocks unmerged, merge them pairwise into two 16×8 merged damaged blocks, merge them pairwise into two 8×16 merged damaged blocks, or merge all four into one 16×16 merged damaged block;
e. for each merged 16×8, 8×16, or 16×16 merged damaged block, determining its predicted motion vector as the average of the predicted motion vectors of the 8×8 damaged blocks it comprises; and
f. for each 8×8 damaged block, 16×8 merged damaged block, 8×16 merged damaged block, or 16×16 merged damaged block, using the predicted motion vector corresponding to the damaged block or merged damaged block as a starting point to perform pixel matching against a plurality of reference blocks in a search window of a reference picture, so as to find in the search window a corresponding reference block matching the damaged block or merged damaged block.
2. The method according to claim 1, wherein step c further comprises:
providing an active regression plane: z_i(x, y) = α_1·x_i² + α_2·x_i·y_i + α_3·y_i² + α_4·x_i + α_5·y_i + α_6;
substituting the coordinates of the center points of the six 4×4 neighboring blocks and their corresponding motion vectors into x_i, y_i, and z_i of the active regression plane to solve for the coefficients α_1, α_2, α_3, α_4, α_5, and α_6; and
determining the predicted motion vector to lie on the active regression plane.
3. The method of claim 1, wherein the pixel matching calculates an error value between the peripheral boundary pixels of each damaged block or merged damaged block and the peripheral boundary pixels of the reference block in the search window.
4. The method of claim 3, wherein the error value is a sum of absolute differences.
5. The method according to claim 3, wherein in step f the pixel matching is performed along a spiral path.
6. The method of claim 5, wherein during the pixel matching process, the pixel matching is stopped when the error value of a specific reference block is smaller than a threshold value, and the specific reference block is used as the corresponding reference block, wherein the threshold value is DT_a = μ×N + γ, where N represents the total number of the peripheral boundary pixels, μ represents the average residual value of the peripheral boundary pixels, and γ represents a constant coefficient.
7. The method of claim 5, wherein during the pixel matching process, the pixel matching is stopped when the error value of a specific reference block is smaller than a threshold value, and the specific reference block is used as the corresponding reference block, wherein the threshold value is DT_b = EBME(u,v)×EBME_β/EBME_α×λ + ε, where (u, v) and EBME(u, v) respectively represent the motion vector corresponding to the damaged block or merged damaged block currently being matched and the error value obtained from its first comparison, EBME_α and EBME_β respectively represent the first-comparison error value and the minimum error value of the other damaged blocks already matched, λ is a constant scaling factor, and ε is a constant coefficient.
8. The method of claim 1, wherein the pixel matching for each 8×8 damaged block or each merged damaged block further comprises:
calculating an error value D_min between the peripheral boundary pixels of a first reference block corresponding to the starting point and the peripheral boundary pixels of the damaged block or merged damaged block; and
comparing the pixels of a second reference block along the spiral path: dividing the peripheral boundary pixels of the damaged block or merged damaged block into 16 equal parts, defining the sum of the error values of the 1st through the p-th of the 16 equal parts as the p-th accumulated error value, where p is an integer from 1 to 16, computing the accumulated error values in sequence and determining whether the p-th accumulated error value is greater than p×D_min/16, and if so, ending the pixel comparison of the second reference block.
CN 200910149802 2009-06-26 2009-06-26 Temporal error concealment method Active CN101931819B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910149802 CN101931819B (en) 2009-06-26 2009-06-26 Temporal error concealment method


Publications (2)

Publication Number Publication Date
CN101931819A CN101931819A (en) 2010-12-29
CN101931819B true CN101931819B (en) 2012-12-26

Family

ID=43370693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910149802 Active CN101931819B (en) 2009-06-26 2009-06-26 Temporal error concealment method

Country Status (1)

Country Link
CN (1) CN101931819B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5428399A (en) * 1991-04-15 1995-06-27 Vistek Electronics Limited Method and apparatus for image translation with improved motion compensation
US6160844A (en) * 1996-10-09 2000-12-12 Sony Corporation Processing digitally encoded signals
CN1440624A (en) * 2000-05-15 2003-09-03 诺基亚有限公司 Video Hiding Method Controlled with Flags
CN1669322A (en) * 2002-07-15 2005-09-14 诺基亚有限公司 Method for error concealment in video sequences



Similar Documents

Publication Publication Date Title
TWI401972B (en) Method for temporal error concealment
JP4908522B2 (en) Method and apparatus for determining an encoding method based on distortion values associated with error concealment
US8428136B2 (en) Dynamic image encoding method and device and program using the same
US9503739B2 (en) Encoder-assisted adaptive video frame interpolation
KR100955152B1 (en) Multidimensional Adjacent Block Prediction for Video Encoding
KR101012624B1 (en) Method and apparatus for integrated error concealment framework
CN110519600A (en) Unified prediction, device, codec and storage device between intra frame
US8155213B2 (en) Seamless wireless video transmission for multimedia applications
CN101090491A (en) Enhanced Block-Based Motion Estimation Algorithm for Video Compression
AU2011367779B2 (en) Method and device for estimating video quality on bitstream level
US20070133689A1 (en) Low-cost motion estimation apparatus and method thereof
US20090034632A1 (en) Intra-forecast mode selecting, moving picture coding method, and device and program using the same
US20130177078A1 (en) Apparatus and method for encoding/decoding video using adaptive prediction block filtering
CN101931819B (en) Temporal error concealment method
KR101349111B1 (en) Method search multiple reference frames
KR101307682B1 (en) method for error detection using the data hiding of motion vector based on the RDO for H.264/AVC baseline profile
KR20070090494A (en) Interframe error concealment apparatus and method using mean motion vector
Garcia-V et al. Image processing for error concealment
Wei et al. Fast mode decision for error resilient video coding
Jagiwala et al. Analysis of block matching algorithms for motion estimation in H. 264 video CODEC
CN108696750A (en) A kind of decision method and device of prediction mode
Hui Spatial-Temporal Error Concealment Scheme for Intra-coded Frames
Garg et al. Interpolated candidate motion vectors for boundary matching error concealment technique in video
Sohn et al. Fast multiple reference frame selection method using correlation of sequence in JVT/H. 264
Qian et al. GM/LM Based Error Concealment for MPEG-4 Video Transmission over High Lossy and Noisy Networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant