CN102984521B - High-efficiency video coding inter-frame mode decision method based on temporal correlation

Publication number: CN102984521B (granted 2015-04-08); earlier publication: CN102984521A (published 2013-03-20)
Application number: CN201210532681.7A (filed 2012-12-12)
Authority: CN (China)
Legal status: Active
Inventors: 何小海, 钟国韵, 李元, 吴晓红, 王正勇, 陶青川
Assignee (original and current): Sichuan University

Classification: Compression or coding systems of TV signals

Abstract

The invention discloses a high-efficiency video coding (HEVC) inter-frame mode decision method based on temporal correlation, which comprises prediction mode configuration and prediction mode selection. The prediction mode selection exploits the temporal correlation between two adjacent frames: the similarity between the prediction unit (PU) mode of the large co-located coding unit (CU) in the previous frame and the PU modes of the smaller CUs in the current block is analysed, and, on the basis of this similarity, a PU mode selection method is designed for the current CUs of each size according to the size of the co-located block. Compared with the existing HEVC standard, the method reduces the computational complexity of encoding to a large extent while leaving the bit rate and video quality almost unchanged.

Description

Inter-frame mode decision method for high-efficiency video coding based on temporal correlation

Technical Field

The invention relates to inter-frame mode decision in video coding in the field of image communication, and in particular to an inter-frame mode decision method for high-efficiency video coding.

Background Art

The current international video coding standard is Advanced Video Coding (H.264/AVC), which improved coding performance considerably over earlier standards. With the wide application of high-definition video, however, the maximum block size of 16×16 in H.264/AVC is no longer well suited to encoding high-definition content. For this reason, the two international standardization bodies ISO/IEC MPEG (Moving Picture Experts Group) and ITU-T VCEG (Video Coding Experts Group) established the Joint Collaborative Team on Video Coding (JCT-VC), which is developing the next-generation international video standard, High Efficiency Video Coding (HEVC). The goal of HEVC is to halve the bit rate, i.e. to double the compression ratio, while maintaining the video quality of H.264/AVC. Through the efforts of many researchers at home and abroad, the coding performance of HEVC has been improved substantially over H.264/AVC.

The H.264/AVC standard performs inter prediction with blocks of size 16×16, 16×8, 8×16, 8×8, 8×4, 4×8 and 4×4, whereas HEVC first partitions the picture into coding units (CUs) of size 8×8, 16×16, 32×32 and 64×64 and then applies up to 8 different prediction unit (PU) modes to each CU for inter prediction. The 8 PU modes are PART_2N×2N, PART_N×2N, PART_2N×N, PART_N×N, PART_nL×2N, PART_nR×2N, PART_2N×nD and PART_2N×nU. The rate-distortion cost is computed for every CU size and for every PU prediction mode inside it, and the CU partitioning and PU mode with the best cost are finally selected for encoding. For a 64×64 CU only 3 PU modes can be selected, for 32×32 and 16×16 CUs 7 modes can be selected, and for an 8×8 CU 4 PU modes can be selected. Since the encoder must compute the rate-distortion cost of every PU mode, the HEVC standard brings an enormous computational complexity to video encoding. For H.264/AVC, many researchers have proposed fast inter-mode decision methods. The article "Fast mode decision algorithm for H.264 using statistics of rate-distortion cost", published in IET Electronics Letters, analyses the statistical distribution of the rate-distortion cost of the co-located macroblock in the previous frame and proposes a fast H.264/AVC inter-mode decision method based on an adaptive threshold. The article "Fast mode decision based on mode adaptation", published in IEEE Transactions on Circuits and Systems for Video Technology, proposes a fast inter-mode decision method based on mode adaptation: using the coding characteristics of spatially and temporally adjacent blocks, it builds a priority-ordered candidate mode list and selects the best inter prediction mode from that list efficiently. The article "Direct inter-mode selection for H.264 video coding using phase correlation", published in IEEE Transactions on Image Processing, captures the motion vector of the current block from the phase correlation between the current block and the reference block and uses this motion vector information to select the best inter prediction mode from a candidate mode list.

To reduce the computational complexity of HEVC encoding, several methods have been proposed. The contribution "Encoding complexity reduction by removal of N×N partition type" in HEVC proposal JCTVC-D087 removes the PART_N×N PU mode from 16×16, 32×32 and 64×64 CUs and keeps it only in 8×8 CUs, thereby reducing the encoding complexity. The contribution "Early termination of CU encoding to reduce HEVC complexity" in proposal JCTVC-F045 proposes that when cbf = 0, i.e. when the AC coefficients after the discrete cosine transform (DCT) are all zero, all PU modes other than PART_2N×2N are skipped, which likewise lowers the computational complexity. Both methods have been adopted in the latest HEVC reference software, HM7.0. In addition, in the current HEVC standard the modes whose rate-distortion cost is checked first for the current CU are PART_2N×2N, PART_2N×N and PART_N×2N. If the rate-distortion cost of PART_2N×N is smaller than that of PART_N×2N, the costs of PART_2N×nU and PART_2N×nD are then checked while the costs of PART_nL×2N and PART_nR×2N are skipped; conversely, if the cost of PART_2N×N is larger than that of PART_N×2N, the costs of PART_nL×2N and PART_nR×2N are checked and those of PART_2N×nU and PART_2N×nD are skipped. This rule also greatly reduces the computational complexity of HEVC encoding. All of the above methods lower the complexity of HEVC encoding to some extent, but considerable redundancy still remains in the temporal dimension of the HEVC inter prediction modes.
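
The following minimal C++ sketch illustrates the asymmetric-partition pruning rule described above. It only restates the rule given in the text: the enum mirrors the PART_* labels, while RdCostFn and ampCandidates are hypothetical names, not the HM7.0 API.

```cpp
#include <vector>

enum PartMode { PART_2Nx2N, PART_2NxN, PART_Nx2N,
                PART_2NxnU, PART_2NxnD, PART_nLx2N, PART_nRx2N };

// Hypothetical cost callback: rate-distortion cost of coding the current CU
// with the given PU partition mode.
using RdCostFn = double (*)(PartMode);

// HM-style pruning of the asymmetric motion partitions: only the pair whose
// orientation matches the cheaper of PART_2NxN / PART_Nx2N is evaluated.
std::vector<PartMode> ampCandidates(RdCostFn rdCost) {
    std::vector<PartMode> tested = {PART_2Nx2N, PART_2NxN, PART_Nx2N};
    if (rdCost(PART_2NxN) < rdCost(PART_Nx2N)) {
        tested.push_back(PART_2NxnU);   // horizontal AMP pair is kept
        tested.push_back(PART_2NxnD);
    } else {
        tested.push_back(PART_nLx2N);   // vertical AMP pair is kept
        tested.push_back(PART_nRx2N);
    }
    return tested;
}
```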

Compared with H.264/AVC, HEVC improves the video compression ratio considerably, but at the cost of a much higher encoding complexity. Although many fast video coding methods have been proposed for H.264/AVC, and some for HEVC, the H.264/AVC methods are not suitable for HEVC, and the existing fast HEVC methods are still some distance from the goal of applying the standard to real-time communication.

Summary of the Invention

In view of the current state and shortcomings of existing inter-frame mode decision methods for high-efficiency video coding, the purpose of the present invention is to provide a new inter-frame mode decision method based on temporal correlation, so as to reduce the computational complexity of HEVC encoding and move toward the goal of applying it to real-time communication.

The basic idea of the present invention is to exploit the similarity of PU mode selection between adjacent frames: according to the PU mode adopted by the large co-located CU in the previous frame, the PU modes of the current CU are selected effectively and unlikely CU partition sizes and PU prediction modes are skipped, which reduces the number of PU modes that must be traversed and hence the number of rate-distortion cost computations, ultimately lowering the computational complexity of HEVC encoding.

The inter-frame mode decision method for high-efficiency video coding based on temporal correlation provided by the present invention comprises prediction mode configuration and prediction mode selection. In the prediction mode configuration, the CU partition depth is not greater than 4 and the PUs use either the combined symmetric and asymmetric prediction modes or only the symmetric prediction modes. In the prediction mode selection, the sum of the rate-distortion costs of the CUs at the current depth is compared with the total rate-distortion cost of the CU at the upper layer; if it is smaller than that of the upper layer, the quadtree split into 4 CUs at the next deeper level is carried out, otherwise the quadtree split is terminated. The prediction mode selection comprises the following steps (a code sketch of these steps is given after the list):

(1) Check the size of the co-located CU of the previous frame for the current CU. If the size of the current CU is smaller than the size of the co-located CU, go to step (2); otherwise traverse all PU modes of the current CU, split it by quadtree into 4 CUs of the next deeper layer, and repeat the above process for each of those deeper CUs;

(2) Determine whether the PU mode of the co-located CU of the previous frame is PART_2N×2N. If so, check only the rate-distortion cost of the PART_2N×2N PU mode for the current CU and go to step (6); otherwise go to step (3);

(3) Determine whether the PU mode of the co-located CU of the previous frame is PART_nL×2N or PART_nR×2N. If so, check only the rate-distortion costs of the PART_N×2N and PART_2N×2N PU modes for the current CU and go to step (6); otherwise go to step (4);

(4) Determine whether the PU mode of the co-located CU of the previous frame is PART_2N×nU or PART_2N×nD. If so, check only the rate-distortion costs of the PART_2N×N and PART_2N×2N PU modes for the current CU and go to step (6); otherwise go to step (5);

(5) Check the rate-distortion costs of all PU modes for the current CU and go to step (6);

(6) Determine whether the size of the current CU is 1/4 of the size of the co-located CU of the previous frame. If so, do not split the current CU by quadtree any further; otherwise split the current CU by quadtree into 4 CUs of the next deeper layer and repeat the process from step (1) for each of those deeper CUs.
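
As a concrete illustration of steps (1) to (6), the following C++ sketch derives the candidate PU modes of the current CU from the PU mode of the co-located CU in the previous frame and checks the early-termination condition of step (6). It is a minimal sketch assuming a simplified CU descriptor; CuInfo, candidatePuModes and stopFurtherSplit are illustrative names, not the HM7.0 data structures, and "1/4 of the size" is read here as one quarter of the CU width.

```cpp
#include <vector>

enum PartMode { PART_2Nx2N, PART_2NxN, PART_Nx2N, PART_NxN,
                PART_2NxnU, PART_2NxnD, PART_nLx2N, PART_nRx2N };

// Simplified descriptor of the co-located CU in the previous frame.
struct CuInfo {
    int      size;      // CU width in luma samples (8, 16, 32 or 64)
    PartMode partMode;  // best PU mode chosen when that CU was coded
};

// Steps (2)-(5): candidate PU modes for a current CU that is smaller than its
// co-located CU (step (1) has already compared the sizes).
std::vector<PartMode> candidatePuModes(const CuInfo& colocated) {
    switch (colocated.partMode) {
    case PART_2Nx2N:                  // step (2): smooth region / uniform motion
        return {PART_2Nx2N};
    case PART_nLx2N:
    case PART_nRx2N:                  // step (3): a left-right split is likely
        return {PART_2Nx2N, PART_Nx2N};
    case PART_2NxnU:
    case PART_2NxnD:                  // step (4): an upper-lower split is likely
        return {PART_2Nx2N, PART_2NxN};
    default:                          // step (5): no reliable temporal hint
        return {PART_2Nx2N, PART_2NxN, PART_Nx2N, PART_NxN,
                PART_2NxnU, PART_2NxnD, PART_nLx2N, PART_nRx2N};
    }
}

// Step (6): stop the quadtree split once the current CU size has reached
// 1/4 of the co-located CU size.
bool stopFurtherSplit(int currentSize, const CuInfo& colocated) {
    return currentSize * 4 <= colocated.size;
}
```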

In the above technical solution, the CU partition depth is preferably 2 to 4, and more preferably 4. Of the two PU configurations, the combined symmetric and asymmetric prediction modes and the symmetric prediction modes only, the combined symmetric and asymmetric configuration is preferred.

In the above technical solution, the rate-distortion cost can be determined by the following formula:

J_mode = (SAD_luma + w_chroma × SAD_chroma) + λ_mode × B_mode

In the formula, J_mode is the rate-distortion cost, SAD_luma is the sum of absolute differences between the original and predicted luma samples, SAD_chroma is the sum of absolute differences between the original and predicted chroma samples, w_chroma is the weight of the chroma distortion, λ_mode is the Lagrange multiplier, and B_mode is the number of bits needed to encode the block in this mode.
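
A minimal sketch of this cost computation, assuming the two distortions, the weight, the multiplier and the bit count have already been obtained (the function and parameter names are illustrative):

```cpp
// J_mode = (SAD_luma + w_chroma * SAD_chroma) + lambda_mode * B_mode
double rdCost(double sadLuma, double sadChroma,
              double wChroma, double lambdaMode, int bitsMode) {
    return (sadLuma + wChroma * sadChroma) + lambdaMode * bitsMode;
}
```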

According to the above method of the present invention, a video encoder that executes the above inter-frame mode decision method for high-efficiency video coding based on temporal correlation can be implemented.

The present invention was completed on the basis of the following line of analysis:

When the PU mode of the co-located CU is PART_2N×2N, the texture of the co-located block should be relatively smooth, or all moving objects in its region have the same motion vector. Given the high similarity between two adjacent frames, the current block region should have similar properties; therefore, for the current CU, only the PU mode PART_2N×2N needs to be checked.

When the PU mode of the co-located CU is PART_2N×N or PART_N×2N, then, according to the similarity between two adjacent frames, the part of the co-located block that corresponds to a CU of the current block contains no PU dividing line, so it cannot be inferred that the corresponding region is smooth or that the moving objects in it all share the same motion vector. Therefore, for the current CU there is no PU mode that can serve as a reference, and all PU modes must be traversed.

When the PU mode of the co-located CU is PART_nL×2N or PART_nR×2N, then, according to the temporal correlation between two adjacent frames and the directionality of motion over time, on the one hand the moving object in the current block has a high probability of being split into a left part and a right part; on the other hand, the dividing line of the PART_N×2N PU mode of some CUs in the current block will correspond to the dividing line of the PART_nL×2N or PART_nR×2N PU mode of the co-located CU. Moreover, given the correlation of the PU modes across the layers of the current block (the block in the current frame with the same position and size as the co-located CU) and between neighbouring CUs at the same layer, PART_N×2N appears with a relatively high probability among the PU modes of the CUs at each layer of the current block, so the current block should check the PU mode PART_N×2N. In addition, for continuity between the CU layers and to preserve coding performance, the PU mode PART_2N×2N should also be taken into account; hence, in this case only 2 PU modes are traversed in total: PART_N×2N and PART_2N×2N. Similarly, when the PU mode of the co-located CU is PART_2N×nU or PART_2N×nD, the moving object in the current block has a high probability of being split into an upper part and a lower part, and the current block should therefore check the PU modes PART_2N×N and PART_2N×2N.

Finally, when the size of the co-located CU is 2N×2N and the smallest CU size in the current block would be N/4×N/4, then, according to the temporal correlation between two adjacent frames, whatever the PU mode of the co-located CU, the dividing lines of the PU modes of an N/4×N/4 CU in the current block cannot correspond to the dividing lines of the PU mode of the co-located CU; in other words, a current CU size of N/4×N/4 is very unlikely to occur. Therefore, for CUs of this size, the rate-distortion cost need not be checked, which saves video encoding complexity.

Compared with the HEVC video coding standard, the method of the present invention greatly reduces the computational complexity of encoding while the loss in compression ratio and video quality is very small. Video compression fundamentally works by removing the various correlations in a video so that the information of the whole video can be represented with less data. The present method analyses the temporal correlation between the PU modes of two adjacent frames: by examining the PU mode of the co-located CU in the previous frame and the correlation between adjacent CUs in the current block, it decides the best PU modes of the current block and skips the other PU modes. Seen from the perspective of PU-mode correlation between adjacent frames, the method removes redundancy among PU modes, and a reasonable removal of redundancy not only removes the encoding computation that the redundant part would require but also loses no video information, so it causes essentially no drop in compression ratio or video quality.

What the method improves is the most critical contributor to the computational complexity of the whole encoding process. In the overall encoding process, motion estimation (including integer-pixel and fractional-pixel motion estimation) accounts for more than 50% of the computational complexity (with some variation across configurations). The key point of the method is that, according to the temporal correlation between two adjacent frames, it skips the checking of the PU modes that are unlikely to occur in the CUs of each layer of the current block, i.e. the rate-distortion cost computation for each PU partition under such a mode, from which the PU mode with the smallest rate-distortion cost would otherwise be chosen. In the rate-distortion cost computation, motion estimation is the most time-consuming step, so skipping several PU modes means skipping the computation of several motion estimations. In terms of computational complexity, therefore, the method addresses the most critical point of the encoding process.

The method keeps the reduction in computational complexity without adding hardware implementation cost. In most cases video coding technology is eventually embedded in hardware devices, including FPGAs and DSPs, so both the extra code and the extra data storage required by an improved method matter. The present method needs very little additional code, essentially a few conditional statements. As for memory, the quantities examined by the method are the PU modes of the co-located CUs of the previous frame, and this information is already stored in the data stream, so the method brings no additional data storage requirement. Consequently, applying the method to hardware devices adds no extra manufacturing cost and can also save power.

Brief Description of the Drawings

Fig. 1 compares the CU partitioning of the HEVC fast inter-frame mode decision method based on temporal characteristics with that of the HM7.0 video coding standard, where (a) is the CU partitioning of the HM7.0 standard and (b) is the CU partitioning of the HEVC fast inter-frame mode decision method based on temporal correlation;

Fig. 2 compares the PU prediction of the HEVC fast inter-frame mode decision method based on temporal characteristics with that of the HM7.0 video coding standard, where (a) is the PU prediction of the HM7.0 standard and (b) is the PU prediction of the HEVC fast inter-frame mode decision method based on temporal correlation;

Fig. 3 is a flow chart of the HEVC fast inter-frame mode decision method based on temporal correlation.

Detailed Description of the Embodiments

The present invention is further described below with reference to an embodiment. It must be pointed out that the following embodiment is only intended to illustrate the invention further and must not be construed as limiting its scope of protection; specific implementations obtained by persons skilled in the art through non-essential improvements and adjustments made on the basis of the above disclosure still fall within the scope of protection of the invention.

1. The programs of the two algorithms are run at the same time with identical configuration files; the reference software is HM7.0, and the quantization parameter (QP) values are set to 27 and 32. The present invention is compared with the reference software algorithm HM7.0 of the HEVC video coding standard. Three aspects of coding performance are compared and analysed: bit rate, peak signal-to-noise ratio (PSNR) and video encoding time (PSNR reflects the objective video quality, and encoding time reflects the computational complexity of encoding). The performance differences are evaluated with the following three indicators:

ΔBitrate = (Bitrate_pro - Bitrate_ref) / Bitrate_ref × 100%

ΔPSNR = PSNR_pro - PSNR_ref

ΔTime = (Time_pro - Time_ref) / Time_ref × 100%

where Bitrate_pro, PSNR_pro and Time_pro are the bit rate, PSNR and video encoding time of the algorithm of the present invention, Bitrate_ref, PSNR_ref and Time_ref are the bit rate, PSNR and video encoding time of the HM7.0 reference algorithm, and ΔBitrate, ΔPSNR and ΔTime are the differences in bit rate, PSNR and video encoding time between the algorithm of the present invention and the HM7.0 reference algorithm.
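
A minimal sketch of these three indicators, assuming the per-run statistics have already been collected (the struct and field names are illustrative):

```cpp
// Coding statistics of one run: bit rate (kbit/s), PSNR (dB), encoding time (s).
struct CodingStats { double bitrate, psnr, time; };

// Relative bit-rate and time differences (%) and absolute PSNR difference (dB)
// of the proposed encoder with respect to the HM7.0 reference.
struct Deltas { double bitratePct, psnrDb, timePct; };

Deltas compare(const CodingStats& pro, const CodingStats& ref) {
    return { (pro.bitrate - ref.bitrate) / ref.bitrate * 100.0,
              pro.psnr - ref.psnr,
             (pro.time - ref.time) / ref.time * 100.0 };
}
```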

2. In HEVC, the PU prediction modes can use the combined symmetric and asymmetric prediction modes or the symmetric prediction modes only. The invention is effective in both cases, but the combined symmetric and asymmetric configuration reduces the encoding complexity more, i.e. achieves a better algorithmic effect, so the combined configuration is adopted here.

3. The encoded material consists of standard HEVC test videos; their names, resolutions and frame rates are: Fourpeople (1280×720, 60 frames/s), Johnny (1280×720, 60 frames/s), KristenandSara (1280×720, 60 frames/s), Cactus (1920×1080, 50 frames/s), Kimono1 (1920×1080, 24 frames/s) and ParkScene (1920×1080, 24 frames/s).

4. Two identical video sequences are input;

5. The two identical video sequences are encoded separately;

6. One video sequence is encoded with the HEVC video encoder HM7.0 in the standard HEVC way;

7. The algorithm of the present invention selects the PU modes of the current CU according to the PU mode of the co-located block in the previous frame;

8. In the prediction mode selection, the sum of the rate-distortion costs of the CUs at the current depth is compared with the total rate-distortion cost of the CU at the upper layer; if it is smaller than that of the upper layer, the quadtree split into 4 CUs at the next deeper level is carried out, otherwise the quadtree split is terminated. The specific prediction mode selection is as follows (a sketch of this quadtree recursion is given after the list):

(1) Check the size of the co-located CU of the previous frame for the current CU. If the size of the current CU is smaller than the size of the co-located CU, go to step (2); otherwise traverse all PU modes of the current CU, split it by quadtree into 4 CUs of the next deeper layer, and repeat the above process for each of those deeper CUs;

(2) Determine whether the PU mode of the co-located CU of the previous frame is PART_2N×2N. If so, check only the rate-distortion cost of the PART_2N×2N PU mode for the current CU and go to step (6); otherwise go to step (3);

(3) Determine whether the PU mode of the co-located CU of the previous frame is PART_nL×2N or PART_nR×2N. If so, check only the rate-distortion costs of the PART_N×2N and PART_2N×2N PU modes for the current CU and go to step (6); otherwise go to step (4);

(4) Determine whether the PU mode of the co-located CU of the previous frame is PART_2N×nU or PART_2N×nD. If so, check only the rate-distortion costs of the PART_2N×N and PART_2N×2N PU modes for the current CU and go to step (6); otherwise go to step (5);

(5) Check the rate-distortion costs of all PU modes for the current CU and go to step (6);

(6) Determine whether the size of the current CU is 1/4 of the size of the co-located CU of the previous frame. If so, do not split the current CU by quadtree any further; otherwise split the current CU by quadtree into 4 CUs and repeat step (1) for each of them.
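
A minimal C++ sketch of the recursion of step 8 follows, combining the depth-wise rate-distortion comparison with the early termination of steps (1) and (6). It re-declares the types and helper from the sketch given after the summary of the invention; encodeCuWithModes, colocatedCu and allPuModes are hypothetical hooks into the encoder, not the HM7.0 API.

```cpp
#include <vector>

// Types and helper from the earlier sketch (declarations only):
enum PartMode { PART_2Nx2N, PART_2NxN, PART_Nx2N, PART_NxN,
                PART_2NxnU, PART_2NxnD, PART_nLx2N, PART_nRx2N };
struct CuInfo { int size; PartMode partMode; };
std::vector<PartMode> candidatePuModes(const CuInfo& colocated);

// Hypothetical hooks into the encoder (declarations only):
double encodeCuWithModes(int x, int y, int size,
                         const std::vector<PartMode>& modes); // best RD cost over 'modes'
CuInfo colocatedCu(int x, int y);                             // co-located CU of previous frame
std::vector<PartMode> allPuModes(int size);                   // all PU modes allowed at 'size'

// Best rate-distortion cost of the CU at (x, y) with the given size,
// recursing into the quadtree only while the split pays off.
double processCu(int x, int y, int size) {
    const CuInfo col = colocatedCu(x, y);

    // Steps (1)-(5): restrict the PU modes when the current CU is smaller than
    // its co-located CU, otherwise traverse all PU modes.
    const std::vector<PartMode> modes =
        (size < col.size) ? candidatePuModes(col) : allPuModes(size);
    const double costHere = encodeCuWithModes(x, y, size, modes);

    // Step (6): no further split once the size reaches 1/4 of the co-located CU,
    // and never below the minimum CU size of 8x8.
    if (size * 4 <= col.size || size <= 8)
        return costHere;

    // Step 8: split into 4 children and keep the split only if the sum of the
    // children's costs is smaller than the cost at the current depth.
    const int half = size / 2;
    const double costSplit = processCu(x,        y,        half)
                           + processCu(x + half, y,        half)
                           + processCu(x,        y + half, half)
                           + processCu(x + half, y + half, half);
    return (costSplit < costHere) ? costSplit : costHere;
}
```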

9. In the mode selection process, the rate-distortion cost formula is as follows:

J_mode = (SAD_luma + w_chroma × SAD_chroma) + λ_mode × B_mode

In the formula, J_mode is the rate-distortion cost, SAD_luma is the sum of absolute differences between the original and predicted luma samples, SAD_chroma is the sum of absolute differences between the original and predicted chroma samples, w_chroma is the weight of the chroma distortion, λ_mode is the Lagrange multiplier, and B_mode is the number of bits needed to encode the block in this mode.

The luma and chroma distortions SAD_luma and SAD_chroma are obtained from the following two formulas respectively:

SAD_luma = Σ_(i,j) |Diff_luma(i, j)|

SAD_chroma = Σ_(i,j) |Diff_chroma(i, j)|

where Diff_luma and Diff_chroma are, respectively:

Diff_luma(i, j) = BlockA_luma(i, j) - BlockB_luma(i, j)

Diff_chroma(i, j) = BlockA_chroma(i, j) - BlockB_chroma(i, j)

where BlockA_luma and BlockB_luma are the luma values of the pixel at coordinate (i, j) in the coding block and in the prediction block respectively, and BlockA_chroma and BlockB_chroma are the chroma values of the pixel at coordinate (i, j) in the coding block and in the prediction block respectively.

The chroma distortion weight w_chroma is obtained from:

w_chroma = 2^((QP - QP_chroma) / 3)

where QP and QP_chroma are the QP values of luma and chroma respectively.

The Lagrange multiplier λ_mode is obtained from:

λ_mode = 2^((QP - 12) / 3)
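
The following sketch puts the formulas of step 9 together: the SADs are computed from the coding block and the prediction block, and the chroma weight and Lagrange multiplier from the QP values. It complements the rdCost sketch given earlier; the row-major block layout is an assumption.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Sum of absolute differences between an original block and its prediction
// (both stored as row-major arrays of the same length).
double sad(const std::vector<int>& original, const std::vector<int>& predicted) {
    double s = 0.0;
    for (std::size_t k = 0; k < original.size(); ++k)
        s += std::abs(original[k] - predicted[k]);
    return s;
}

// Chroma distortion weight: w_chroma = 2^((QP - QP_chroma) / 3)
double chromaWeight(int qp, int qpChroma) {
    return std::pow(2.0, (qp - qpChroma) / 3.0);
}

// Lagrange multiplier: lambda_mode = 2^((QP - 12) / 3)
double lambdaMode(int qp) {
    return std::pow(2.0, (qp - 12) / 3.0);
}
```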

10. In the algorithm of the present invention, the CU partition depth can be set to 2 to 4, but the larger the partition depth, the more PU prediction modes the method skips and the larger the reduction in encoding complexity. Therefore, the CU partition depth of the present invention is set to 4.

11. The two programs output the encoded video sequences together with their respective bit rates, PSNR values and total video encoding times; the results for the three indicators above are shown in Tables 1-3. The statistics show that, compared with the HEVC standard, the algorithm of the present invention increases the bit rate by 0.13-1.05% (on the whole slightly more at the large QP value than at the small one), lowers the PSNR by 0.00-0.06 dB, and reduces the computational complexity of encoding by 22.38-58.36%. Overall, compared with the HEVC video coding standard, the algorithm of the present invention substantially reduces the computational complexity of encoding at the cost of only a very small loss in video compression ratio (reflected in the bit-rate change) and video quality (see Tables 1-3).

Table 1 Comparison of bit rate between the algorithm of the present invention and the HM7.0 reference algorithm

Table 2 Comparison of PSNR values between the algorithm of the present invention and the HM7.0 reference algorithm

Table 3 Comparison of video encoding time between the algorithm of the present invention and the HM7.0 reference algorithm

Claims (4)

1. An inter-frame mode decision method for high-efficiency video coding based on temporal correlation, comprising prediction mode configuration and prediction mode selection, wherein in the prediction mode configuration the coding unit (CU) partition depth is not greater than 4 and the prediction units (PUs) use either the combined symmetric and asymmetric prediction modes or only the symmetric prediction modes, and wherein in the prediction mode selection the sum of the rate-distortion costs of the CUs at the current depth is compared with the total rate-distortion cost of the CU at the upper layer, the quadtree split into 4 CUs at the next deeper level being carried out if it is smaller than that of the upper layer and the quadtree split being terminated otherwise, characterized in that the prediction mode selection comprises the following steps:
(1) checking the size of the co-located CU of the previous frame for the current CU; if the size of the current CU is smaller than the size of the co-located CU, going to step (2); otherwise traversing all PU modes of the current CU, splitting it by quadtree into 4 CUs of the next deeper layer, and repeating the above process for each of those deeper CUs;
(2) determining whether the PU mode of the co-located CU of the previous frame is PART_2N×2N; if so, checking only the rate-distortion cost of the PART_2N×2N PU mode for the current CU and going to step (6); otherwise going to step (3);
(3) determining whether the PU mode of the co-located CU of the previous frame is PART_nL×2N or PART_nR×2N; if so, checking only the rate-distortion costs of the PART_N×2N and PART_2N×2N PU modes for the current CU and going to step (6); otherwise going to step (4);
(4) determining whether the PU mode of the co-located CU of the previous frame is PART_2N×nU or PART_2N×nD; if so, checking only the rate-distortion costs of the PART_2N×N and PART_2N×2N PU modes for the current CU and going to step (6); otherwise going to step (5);
(5) checking the rate-distortion costs of all PU modes for the current CU and going to step (6);
(6) determining whether the size of the current CU is 1/4 of the size of the co-located CU of the previous frame; if so, not splitting the current CU by quadtree any further; otherwise splitting the current CU by quadtree into 4 CUs of the next deeper layer and repeating the process from step (1) for each of those deeper CUs.
2. The inter-frame mode decision method for high-efficiency video coding based on temporal correlation as claimed in claim 1, characterized in that the CU partition depth is 2 to 4.
3. The inter-frame mode decision method for high-efficiency video coding based on temporal correlation as claimed in claim 2, characterized in that the CU partition depth is 4.
4. The inter-frame mode decision method for high-efficiency video coding based on temporal correlation as claimed in any one of claims 1 to 3, characterized in that the rate-distortion cost is determined by the following formula:
J_mode = (SAD_luma + w_chroma × SAD_chroma) + λ_mode × B_mode
where J_mode is the rate-distortion cost, SAD_luma is the sum of absolute differences between the original and predicted luma samples, SAD_chroma is the sum of absolute differences between the original and predicted chroma samples, w_chroma is the weight of the chroma distortion, λ_mode is the Lagrange multiplier, and B_mode is the number of bits needed to encode the block in this mode.

Citations (cited by examiner)
Patent citations:
CN102355579A - Method and device for coding or decoding in prediction mode
CN102447907A - Coding method for video sequence of HEVC (high efficiency video coding)
Non-patent citations:
Ping Wu, Ming Li, "Introduction to the High-Efficiency Video Coding Standard", ZTE Communications, vol. 10, no. 2, pp. 1-9, 2012.

Legal Events
Publication (C06, PB01); entry into force of request for substantive examination (C10, SE01); grant of patent (C14, GR01).