
CN115883833B - Intra-frame prediction method and device

Intra-frame prediction method and device

Info

Publication number
CN115883833B
CN115883833B (application CN202111144070.0A)
Authority
CN
China
Prior art keywords
intra
prediction mode
frame prediction
candidate
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111144070.0A
Other languages
Chinese (zh)
Other versions
CN115883833A (en)
Inventor
周川
吕卓逸
张晋荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111144070.0A
Priority to PCT/CN2022/120535 (WO2023051375A1)
Publication of CN115883833A
Application granted
Publication of CN115883833B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/11Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present application discloses an intra-frame prediction method and device, which belongs to the technical field of video coding and decoding standards. The intra-frame prediction method of an embodiment of the present application includes: obtaining a gradient histogram of an intra-frame prediction mode of a template corresponding to a coding unit to be decoded; obtaining a first intra-frame prediction mode and a first candidate intra-frame prediction mode in the gradient histogram; obtaining a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and a set of candidate intra-frame prediction modes; obtaining a prediction sample of the coding unit to be decoded according to the first intra-frame prediction mode and the second intra-frame prediction mode; wherein the set of candidate intra-frame prediction modes includes at least one intra-frame prediction mode in the gradient histogram except the first intra-frame prediction mode and the first candidate intra-frame prediction mode.

Description

Intra-frame prediction method and device
Technical Field
The application belongs to the technical field of video coding and decoding standards, and particularly relates to an intra-frame prediction method and device.
Background
In decoder-side intra mode derivation (DIMD), the decoding end derives the intra prediction mode by calculating the texture direction of the adjacent areas above and to the left of the current coding block and selects the two intra prediction modes with the largest amplitudes to derive the intra prediction mode of the current coding block. However, directly selecting the intra prediction modes in this way reduces prediction accuracy when the texture of the adjacent areas and the current coding block is complex.
Disclosure of Invention
The embodiment of the application provides an intra-frame prediction method and device, which can improve the prediction accuracy.
In a first aspect, there is provided an intra prediction method, including:
acquiring a gradient histogram of an intra-frame prediction mode of a template corresponding to a coding unit to be decoded;
Acquiring a first intra-frame prediction mode and a first candidate intra-frame prediction mode in the gradient histogram;
acquiring a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set;
obtaining a prediction sample of a coding unit to be decoded according to the first intra-frame prediction mode and the second intra-frame prediction mode;
Wherein the set of candidate intra-prediction modes includes at least one intra-prediction mode in the gradient histogram other than the first intra-prediction mode and the first candidate intra-prediction mode.
In a second aspect, there is provided an intra prediction apparatus comprising:
The first acquisition module is used for acquiring a gradient histogram of an intra-frame prediction mode of a template corresponding to a coding unit to be decoded;
A second obtaining module, configured to obtain a first intra-prediction mode and a first candidate intra-prediction mode in the gradient histogram;
A third obtaining module, configured to obtain a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set;
a fourth obtaining module, configured to obtain a prediction sample of a coding unit to be decoded according to the first intra-frame prediction mode and the second intra-frame prediction mode;
Wherein the set of candidate intra-prediction modes includes at least one intra-prediction mode in the gradient histogram other than the first intra-prediction mode and the first candidate intra-prediction mode.
In a third aspect, there is provided an intra prediction apparatus comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor implements the steps of the method as described in the first aspect.
In a fourth aspect, an intra-frame prediction apparatus is provided, including a processor and a communication interface, where the processor is configured to obtain a gradient histogram of an intra-frame prediction mode of a template corresponding to a coding unit to be decoded;
Acquiring a first intra-frame prediction mode and a first candidate intra-frame prediction mode in the gradient histogram;
acquiring a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set;
obtaining a prediction sample of a coding unit to be decoded according to the first intra-frame prediction mode and the second intra-frame prediction mode;
Wherein the set of candidate intra-prediction modes includes at least one intra-prediction mode in the gradient histogram other than the first intra-prediction mode and the first candidate intra-prediction mode.
In a fifth aspect, there is provided a readable storage medium having stored thereon a program or instructions which when executed by a processor realizes the steps of the method according to the first aspect.
In a sixth aspect, there is provided a chip comprising a processor and a communication interface coupled to the processor for running a program or instructions implementing the steps of the method according to the first aspect.
In a seventh aspect, a computer program/program product is provided, the computer program/program product being stored in a non-transitory storage medium, the program/program product being executed by at least one processor to implement the steps of the method according to the first aspect.
In the embodiment of the application, the first intra-frame prediction mode and the first candidate intra-frame prediction mode are firstly obtained according to the gradient histogram, then the second intra-frame prediction mode is determined according to the first candidate intra-frame prediction mode and other intra-frame prediction modes in the gradient histogram, and finally the prediction sample of the coding unit to be decoded is obtained according to the first intra-frame prediction mode and the second intra-frame prediction mode.
Drawings
FIG. 1 is a flow chart of an intra prediction method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the relationship between the current coding unit to be decoded and the reference samples and reconstructed samples of the corresponding templates;
FIG. 3 is a block diagram of an intra prediction apparatus according to an embodiment of the present application;
fig. 4 is a block diagram showing the structure of an intra prediction apparatus according to an embodiment of the present application;
Fig. 5 is a block diagram of a communication device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are derived by a person skilled in the art based on the embodiments of the application, fall within the scope of protection of the application.
The terms "first", "second" and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments of the application are capable of operation in sequences other than those illustrated or otherwise described herein; moreover, the objects distinguished by "first" and "second" are generally of one type, and their number is not limited, for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The prior art related to the present application is briefly described as follows.
In video coding, a frame of image is divided into a number of macroblocks, and prediction blocks are obtained using intra prediction or inter prediction. The difference between the original block and the predicted block is a residual block, and then the residual block is transformed, quantized, and entropy-encoded.
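As a concrete illustration of the pipeline just described, the following minimal Python sketch (not taken from the patent; the 4x4 block values and the quantization step of 8 are arbitrary assumptions, and the transform step is omitted) shows how a residual block is formed from an original block and a prediction block before quantization and entropy coding.

```python
import numpy as np

def build_residual(original_block: np.ndarray, predicted_block: np.ndarray) -> np.ndarray:
    """Residual = original - prediction; this is what gets transformed and quantized."""
    return original_block.astype(np.int16) - predicted_block.astype(np.int16)

# Toy 4x4 block predicted by a flat value of 60; the quantization step of 8 is arbitrary.
original = np.array([[52, 55, 61, 66],
                     [63, 59, 55, 90],
                     [62, 59, 68, 113],
                     [63, 58, 71, 122]], dtype=np.uint8)
predicted = np.full((4, 4), 60, dtype=np.uint8)

residual = build_residual(original, predicted)
quantized = np.round(residual / 8).astype(np.int16)  # transform omitted in this sketch
```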
The picture types in video standards typically include I pictures, P pictures, and B pictures. The I picture can be independently decoded without referring to other pictures, the P picture uses a plurality of pictures (past) that are in display order before the current picture as reference pictures, and the B picture uses a plurality of pictures (past) that are in display order before the current picture and a plurality of pictures (future) that are in display order after the current picture as reference pictures.
1. Intra prediction
Intra prediction has many prediction modes to handle multiple types of textures in an image, including DC, planar, and some angular prediction modes. The peripheral reconstructed pixels are used as input, and the predicted value of the current predicted block is obtained through the specified predicted mode, so that the purpose of removing the spatial redundancy is achieved. The specified prediction mode index can be obtained explicitly from the code stream or can be inferred implicitly from the decoding end.
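As an illustration of how peripheral reconstructed pixels drive a prediction mode, the sketch below implements DC prediction only; the function name dc_predict and the neighbour arrays are illustrative assumptions, and planar and angular modes would follow the codec's own interpolation rules.

```python
import numpy as np

def dc_predict(top_neighbors: np.ndarray, left_neighbors: np.ndarray, size: int) -> np.ndarray:
    """DC intra prediction: fill the block with the mean of the reconstructed neighbours."""
    dc_value = int(round((int(top_neighbors.sum()) + int(left_neighbors.sum())) /
                         (len(top_neighbors) + len(left_neighbors))))
    return np.full((size, size), dc_value, dtype=np.int32)

# A 4x4 block predicted from one reconstructed row above and one column to the left.
pred = dc_predict(np.array([100, 102, 101, 99]), np.array([98, 97, 100, 103]), 4)
```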
2. Most probable intra prediction mode (Most Probable Mode, MPM)
MPM is a technique for explicitly deriving intra-prediction modes. In view of the strong correlation between the current prediction block and the surrounding neighboring blocks, a most probable intra prediction mode candidate list is constructed using the prediction modes of the neighboring blocks. If the optimal prediction mode is in the list, only the index of this mode in the list needs to be written into the code stream, so that the number of bits required for coding the intra-prediction mode can be saved.
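The idea can be sketched in a codec-agnostic way as follows; build_mpm_list and signal_intra_mode are hypothetical helpers, and real standards define their own candidate order, list length and default padding modes.

```python
def build_mpm_list(left_mode: int, above_mode: int, planar: int = 0, dc: int = 1) -> list:
    """Assemble a small most-probable-mode list from the neighbouring block modes,
    dropping duplicates; order gives priority."""
    mpm = []
    for mode in (left_mode, above_mode, planar, dc):
        if mode not in mpm:
            mpm.append(mode)
    return mpm

def signal_intra_mode(best_mode: int, mpm: list) -> dict:
    """If the best mode is in the MPM list, only its short index needs to be coded."""
    if best_mode in mpm:
        return {"mpm_flag": 1, "mpm_index": mpm.index(best_mode)}
    return {"mpm_flag": 0, "mode": best_mode}
```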
3. The decoding end derives intra prediction modes (Decoder-side intra mode derivation, DIMD)
The DIMD mode is a technique for implicitly deriving intra prediction modes. At the decoding end, horizontal and vertical Sobel filters are applied to the pixels in a template of width N around the block to compute a gradient histogram: the direction of each gradient is converted into an intra prediction mode, and the gradient intensity is accumulated as the amplitude of the corresponding intra prediction mode. The intra prediction modes are then derived by comparing the amplitudes in the gradient histogram. If the DIMD mode flag of the current coding unit to be decoded is true, intra prediction is performed using the derived intra prediction modes; if the DIMD mode flag is false, the derived intra prediction modes are used to construct the MPM list.
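The sketch below shows one way this gradient-histogram step could look. The Sobel kernels are the standard ones, but the angle-to-mode bucketing, the 67-mode count and the magnitude measure |gx|+|gy| are illustrative assumptions rather than the exact tables used by the codec.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

def gradient_histogram(template: np.ndarray, num_modes: int = 67) -> np.ndarray:
    """Accumulate gradient magnitude per (illustratively bucketed) angular mode."""
    hist = np.zeros(num_modes)
    h, w = template.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = template[y - 1:y + 2, x - 1:x + 2].astype(np.int32)
            gx = int((SOBEL_X * window).sum())
            gy = int((SOBEL_Y * window).sum())
            if gx == 0 and gy == 0:
                continue
            angle = float(np.arctan2(gy, gx))  # gradient direction in [-pi, pi]
            # Illustrative bucketing of the direction onto angular modes 2..66;
            # a real codec uses its own angle-to-mode table.
            bucket = int(((angle + np.pi) / (2 * np.pi)) * (num_modes - 2)) % (num_modes - 2)
            hist[2 + bucket] += abs(gx) + abs(gy)  # amplitude accumulated per mode
    return hist

def top_modes(hist: np.ndarray, count: int) -> list:
    """Modes with the largest accumulated amplitudes, largest first."""
    return list(np.argsort(hist)[::-1][:count])
```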
The following describes in detail the intra prediction method and apparatus provided by the embodiments of the present application through some embodiments and application scenarios thereof with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides an intra prediction method, including:
Step 101, obtaining a gradient histogram of an intra-frame prediction mode of a template corresponding to a coding unit to be decoded;
Step 102, acquiring a first intra-frame prediction mode and a first candidate intra-frame prediction mode in the gradient histogram;
step 103, obtaining a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set;
Step 104, obtaining a prediction sample of a coding unit to be decoded according to the first intra-frame prediction mode and the second intra-frame prediction mode;
Wherein the set of candidate intra-prediction modes includes at least one intra-prediction mode of the gradient histogram other than the first intra-prediction mode and the first candidate intra-prediction mode, and further wherein the amplitudes of the first intra-prediction mode and the first candidate intra-prediction mode are each greater than or equal to the amplitude of each intra-prediction mode of the set of candidate intra-prediction modes.
Alternatively, the first intra-frame prediction mode is the intra-frame prediction mode with the largest amplitude among the intra-frame prediction modes contained in the gradient histogram, and the first candidate intra-frame prediction mode is the intra-frame prediction mode with the second largest amplitude. It should be noted that, in the embodiment of the application, the first intra-frame prediction mode and the first candidate intra-frame prediction mode may be acquired by sorting all intra-frame prediction modes in descending order of amplitude after the gradient histogram is obtained and selecting the two intra-frame prediction modes ranked first, which may be understood as directly determining the intra-frame prediction modes with the largest and the second largest amplitudes in the gradient histogram.
Optionally, in the embodiment of the present application, the amplitude of the first intra prediction mode is greater than or equal to that of the first candidate intra prediction mode.
In the present application, the first intra-frame prediction mode and the first candidate intra-frame prediction mode are not directly used to obtain the prediction samples of the coding unit to be decoded, because the two intra-frame prediction modes selected in this way may not be optimal. The first candidate intra-frame prediction mode is instead treated as an intra-frame prediction mode to be verified: if the subsequent comparison shows that the first candidate intra-frame prediction mode can be used directly, it is taken as the second intra-frame prediction mode; if it cannot be used directly, one of the other compared intra-frame prediction modes that can be used directly is selected as the second intra-frame prediction mode.
It should be noted that, when performing video encoding and decoding, each frame image is processed in turn, and the frame image is divided into a plurality of coding units for prediction; that is, the coding unit to be decoded mentioned in the embodiments of the present application may be understood as the coding unit that is currently being processed and awaiting decoding.
Specifically, the implementation manner of step 101 in the embodiment of the present application is:
obtaining reconstructed samples of adjacent decoded pixels of a coding unit to be decoded;
constructing a template corresponding to a coding unit to be decoded according to the reconstructed sample;
and carrying out gradient analysis on the template to obtain a gradient histogram of an intra-frame prediction mode corresponding to the coding unit to be decoded.
For example, as shown in fig. 2, "Current CU" represents the coding unit currently to be decoded, the region labeled "Template" represents the reconstructed samples of the template corresponding to the coding unit currently to be decoded, and the region labeled "Reference of the template" represents the reference samples of that template.
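A minimal sketch of the template construction described above, assuming the template consists of n_lines reconstructed rows above and n_lines reconstructed columns to the left of the current CU; picture-boundary handling is omitted and the L-shape is stored in one padded array for simplicity.

```python
import numpy as np

def extract_template(recon: np.ndarray, cu_y: int, cu_x: int,
                     cu_h: int, cu_w: int, n_lines: int = 3) -> np.ndarray:
    """Gather reconstructed samples from n_lines rows above and n_lines columns to the
    left of the CU (an L-shape) into one padded array; the bottom-right area, which
    belongs to the still-undecoded CU, is left at the padding value -1."""
    top = recon[cu_y - n_lines:cu_y, cu_x - n_lines:cu_x + cu_w]   # rows above, incl. corner
    left = recon[cu_y:cu_y + cu_h, cu_x - n_lines:cu_x]            # columns to the left
    template = np.full((cu_h + n_lines, cu_w + n_lines), -1, dtype=np.int32)
    template[:n_lines, :] = top
    template[n_lines:, :n_lines] = left
    return template
```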
Alternatively, one implementation manner that may be adopted in step 103 of the embodiment of the present application is:
step 1031, respectively utilizing a first candidate intra-frame prediction mode and each intra-frame prediction mode in a candidate intra-frame prediction mode set to generate a prediction sample of a template corresponding to a coding unit to be decoded;
it should be noted that, this step is to obtain a prediction sample of a template by using each intra prediction mode.
Step 1032, performing a first operation on the reconstructed samples of the template and the predicted samples of the template;
It should be noted that the first operation in the embodiment of the present application realizes the comparison of two samples, mainly to obtain the difference between the two samples; for example, the first operation may be a cost calculation. Of course, the specific form of the first operation is not limited in the embodiment of the present application, and any operation capable of obtaining the difference between two samples belongs to the protection scope of the first operation.
Step 1033, determining a second intra-frame prediction mode from the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set according to the result corresponding to the first operation;
It should be noted that, in step 1033, the intra-frame prediction mode for which the result corresponding to the first operation indicates the smallest difference between the reconstructed samples and the predicted samples is selected as the second intra-frame prediction mode.
Optionally, in the case of the first operation being a cost calculation, the implementation of step 1033 is:
Determining, among the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set, the intra-frame prediction mode with the minimum cost calculation result as the second intra-frame prediction mode.
That is, the smallest difference between the reconstructed samples and the predicted samples corresponds to the smallest result of the cost calculation.
Optionally, the result of the cost calculation may be the sum of absolute transformed differences (SATD), the sum of absolute differences (SAD), or the like. This is certainly not limited in the embodiment of the present application, and any calculation result capable of representing the difference between two samples belongs to the protection scope of the result of the cost calculation.
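The selection in steps 1031-1033 can be sketched as follows, using SAD as the cost (SATD or another measure could be substituted, as noted above); predict_template is an assumed helper that performs ordinary intra prediction of the template from its reference samples.

```python
import numpy as np

def sad(a: np.ndarray, b: np.ndarray) -> int:
    """Sum of absolute differences; SATD or another cost could be substituted."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def select_second_mode(first_candidate: int, candidate_set: list,
                       template_recon: np.ndarray, predict_template) -> int:
    """Steps 1031-1033: predict the template with each mode, compare against the
    reconstructed template, and keep the mode whose cost is smallest."""
    best_mode, best_cost = first_candidate, None
    for mode in [first_candidate] + list(candidate_set):
        cost = sad(predict_template(mode), template_recon)
        if best_cost is None or cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```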
It should be noted that, because the present application obtains the intra-frame prediction mode derived at the decoding end, the embodiment of the present application may, optionally, acquire the second intra-frame prediction mode only when the decoder-side intra mode derivation (DIMD) flag is true. Of course, the embodiment of the present application may also disregard the DIMD flag; that is, when obtaining the prediction samples, the first intra-frame prediction mode and the second intra-frame prediction mode may be obtained directly by this method, and the prediction samples are then obtained.
It should be noted that, in the case of considering the DIMD flag, the DIMD flag needs to be acquired first, and optionally, the method for acquiring the DIMD flag in the embodiment of the present application is as follows:
Before the gradient histogram of the intra-frame prediction mode of the template corresponding to the coding unit to be decoded is obtained, intra-frame prediction information of the coding unit to be decoded is obtained from the code stream, where the intra-frame prediction information includes a decoder-side intra mode derivation (DIMD) flag.
Further, the implementation manner of step 103 is:
And in the case that DIMD is identified as true, acquiring a second intra-prediction mode according to the first candidate intra-prediction mode and the candidate intra-prediction mode set.
Optionally, if the DIMD flag is true, a further judgment may be made according to the image type of the image to which the coding unit to be decoded belongs; specifically, if that image type is an I picture, the second intra-frame prediction mode is obtained according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set.
Alternatively, in the case that the DIMD flag is false, the decoding end no longer acquires the second intra-frame prediction mode using the method of step 103, but directly adds the first intra-frame prediction mode and the first candidate intra-frame prediction mode to the MPM list.
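Putting the two branches together, a hedged sketch of the flag handling might look like this; it reuses the illustrative select_second_mode helper from the earlier sketch and simply appends the derived modes to an MPM list when the flag is false.

```python
def handle_dimd_flag(dimd_flag: bool, first_mode: int, first_candidate: int,
                     candidate_set: list, template_recon, predict_template,
                     mpm_list: list):
    """Branch on the parsed DIMD flag as described above."""
    if dimd_flag:
        # Flag true: derive the second mode by the template-cost comparison.
        second_mode = select_second_mode(first_candidate, candidate_set,
                                         template_recon, predict_template)
        return first_mode, second_mode   # both modes drive the CU prediction
    # Flag false: the derived modes only seed the MPM list.
    for mode in (first_mode, first_candidate):
        if mode not in mpm_list:
            mpm_list.append(mode)
    return None
```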
Optionally, without considering DIMD identification, the method further comprises one of:
a11, adding the first intra-frame prediction mode and the first candidate intra-frame prediction mode into an MPM list;
It should be noted that, since a certain processing time is required to obtain the second intra-frame prediction mode, when the delay is not allowed, in order to increase the prediction rate, the first intra-frame prediction mode and the first candidate intra-frame prediction mode are selected to be added to the MPM list.
A12, adding the first intra-frame prediction mode and the second intra-frame prediction mode into an MPM list;
It should be noted that, because the second intra-frame prediction mode is a more accurate intra-frame prediction mode, the first intra-frame prediction mode and the second intra-frame prediction mode are added to the MPM list under the condition of allowing a delay, so as to improve the prediction accuracy.
Specific applications of the present application are exemplified below.
The main implementation process of the specific application case I is as follows:
Step S101, obtaining intra-frame prediction information of a current coding unit to be decoded from a code stream, where the intra-frame prediction information includes DIMD mode identifiers, MPM index values, intra-frame prediction mode index values, and the like.
Step S102, obtaining reconstructed samples of adjacent decoded pixels of a coding unit to be decoded currently;
step S103, constructing a template, wherein the template comprises a row (or a plurality of rows) above a coding unit to be decoded currently and/or a reconstructed sample of a decoded pixel in a left column (or a plurality of columns);
step S104, carrying out gradient analysis on the template to obtain a gradient histogram of an intra-frame prediction mode;
Step S105, selecting the intra-frame prediction modes with the largest and the second largest amplitude values from the gradient histogram as a first intra-frame prediction mode and a first candidate intra-frame prediction mode;
step S106, obtaining the intra-frame prediction mode of the current coding unit to be decoded according to the intra-frame prediction information obtained in step S101, wherein the method specifically comprises the following steps:
If the DIMD mode flag is false and the MPM list is used, the first intra-frame prediction mode and the first candidate intra-frame prediction mode obtained in step S105 are added to the MPM list;
If the DIMD mode flag is true, a final second intra-frame prediction mode is derived as follows, and the first intra-frame prediction mode and the final second intra-frame prediction mode are taken as the intra-frame prediction modes of the coding unit currently to be decoded.
Specifically, the final second intra prediction mode is derived by:
Selecting an intra-frame prediction mode with the third largest amplitude value from the gradient histogram as a second candidate intra-frame prediction mode, comparing the second candidate intra-frame prediction mode with the first candidate intra-frame prediction mode, and selecting a final second intra-frame prediction mode;
the comparison is as follows:
For the second candidate intra-frame prediction mode and the first candidate intra-frame prediction mode, the reference samples of the template are used to generate prediction samples of the template for each mode. Cost calculation is performed between the prediction samples corresponding to the second candidate intra-frame prediction mode and the reconstructed samples of the template to obtain the sum of absolute transformed differences corresponding to the second candidate intra-frame prediction mode, and cost calculation is performed between the prediction samples corresponding to the first candidate intra-frame prediction mode and the reconstructed samples of the template to obtain the sum of absolute transformed differences corresponding to the first candidate intra-frame prediction mode. The two sums are compared: if the sum of absolute transformed differences corresponding to the second candidate intra-frame prediction mode is the smaller, the second candidate intra-frame prediction mode is selected as the final second intra-frame prediction mode; if the sum of absolute transformed differences corresponding to the first candidate intra-frame prediction mode is the smaller, the first candidate intra-frame prediction mode is selected as the final second intra-frame prediction mode.
Step S107, calculating a predicted value of a current coding unit to be decoded by using the obtained intra-frame prediction mode;
Specifically, the intra prediction modes are the first intra prediction mode and the first candidate intra prediction mode, or the first intra prediction mode and the second intra prediction mode, or the intra prediction mode may be obtained by other intra prediction methods (for example, methods other than the DIMD mode mentioned in the embodiments of the present application).
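Stitching case I together under the same assumptions as the earlier sketches (extract_template, gradient_histogram, top_modes and select_second_mode are the illustrative helpers defined above, and predict_template remains an assumed callback), the decoder-side flow might be sketched as follows; sample bookkeeping is deliberately simplified.

```python
def dimd_case_one(recon, cu_y, cu_x, cu_h, cu_w, dimd_flag, predict_template, mpm_list):
    """Case I: take the two largest-amplitude modes, then, if the DIMD flag is true,
    let the third-largest mode challenge the first candidate via the template cost."""
    template = extract_template(recon, cu_y, cu_x, cu_h, cu_w)
    hist = gradient_histogram(template)
    first_mode, first_candidate, second_candidate = top_modes(hist, 3)
    if not dimd_flag:
        for mode in (first_mode, first_candidate):
            if mode not in mpm_list:
                mpm_list.append(mode)
        return None
    final_second = select_second_mode(first_candidate, [second_candidate],
                                      template, predict_template)
    return first_mode, final_second   # both modes are used to predict the CU
```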
The main implementation process of the specific application case II is as follows:
step S201, obtaining reconstructed samples of adjacent decoded pixels of a coding unit to be decoded currently;
step S202, constructing a template, wherein the template comprises a row (or a plurality of rows) above a coding unit to be decoded currently and/or a reconstructed sample of a decoded pixel in a left column (or a plurality of columns);
Step S203, carrying out gradient analysis on the template to obtain a gradient histogram of an intra-frame prediction mode;
step S204, selecting the intra-frame prediction mode with the largest amplitude value and the second largest from the gradient histogram as a first intra-frame prediction mode and a first candidate intra-frame prediction mode;
Step S205, selecting the intra-frame prediction mode with the third largest amplitude value from the gradient histogram as a second candidate intra-frame prediction mode and the intra-frame prediction mode with the fourth largest amplitude value as a third candidate intra-frame prediction mode, comparing the selected candidate intra-frame prediction modes with the first candidate intra-frame prediction mode, and selecting a final second intra-frame prediction mode;
the comparison is as follows:
For the second candidate intra-frame prediction mode, the third candidate intra-frame prediction mode and the first candidate intra-frame prediction mode, the reference samples of the template are used to generate prediction samples of the template for each mode. Cost calculation is performed between the prediction samples corresponding to each of these modes and the reconstructed samples of the template, yielding the sum of absolute transformed differences corresponding to the second candidate intra-frame prediction mode, the third candidate intra-frame prediction mode and the first candidate intra-frame prediction mode, respectively. The three sums are compared, and the candidate intra-frame prediction mode whose sum of absolute transformed differences is the smallest is selected as the final second intra-frame prediction mode.
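With the illustrative select_second_mode helper above, case II differs from case I only in how many challengers enter the comparison; a usage sketch, reusing hist, template and predict_template from the case I sketch:

```python
# Case II: the third- and fourth-largest modes both challenge the first candidate.
first_mode, first_cand, second_cand, third_cand = top_modes(hist, 4)
final_second = select_second_mode(first_cand, [second_cand, third_cand],
                                  template, predict_template)
```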
It should be noted that the cost function may be other calculation methods, which are not limited herein.
Step S206, obtaining the intra-frame prediction information of the current coding unit to be decoded from the code stream, wherein the intra-frame prediction information comprises DIMD mode identifications, MPM index values, intra-frame prediction mode index values and the like;
It should be noted that, for the intra-frame prediction method used for the MPM list, the first intra-frame prediction mode and the first candidate intra-frame prediction mode obtained in step S204 may be added to the MPM list for the first type of intra-frame prediction mode, and the first intra-frame prediction mode and the final second intra-frame prediction mode obtained in step S205 may be added to the MPM list for the second type of intra-frame prediction mode.
The first type of intra prediction mode and the second type of intra prediction mode may be implicitly derived or explicitly signaled by an identifier, which is not limited herein.
The main implementation process of the specific application case III is as follows:
Step S301, obtaining intra-frame prediction information of a current coding unit to be decoded from a code stream, where the intra-frame prediction information includes DIMD mode identifiers, MPM index values, intra-frame prediction mode index values, and the like.
Step S302, obtaining reconstructed samples of adjacent decoded pixels of a coding unit to be decoded currently;
Step S303, constructing a template, wherein the template comprises a row (or a plurality of rows) above a coding unit to be decoded currently and/or a reconstructed sample of a decoded pixel in a left column (or a plurality of columns);
Step S304, carrying out gradient analysis on the template to obtain a gradient histogram of an intra-frame angle prediction mode;
Step S305, selecting the intra-frame prediction mode with the largest amplitude value and the second largest from the gradient histogram as a first intra-frame prediction mode and a first candidate intra-frame prediction mode;
Step S306, obtaining the intra-frame prediction mode of the current coding unit to be decoded according to the intra-frame prediction information obtained in step S301, wherein the intra-frame prediction mode is specifically one of the following:
If the DIMD mode flag is false and the MPM list is used, the first intra-frame prediction mode and the first candidate intra-frame prediction mode obtained in step S305 are added to the MPM list;
If the DIMD mode flag is true and the current picture type is an I picture, a final second intra-frame prediction mode is derived as follows, and the first intra-frame prediction mode and the final second intra-frame prediction mode are taken as the intra-frame prediction modes of the coding unit currently to be decoded.
Specifically, the final second intra prediction mode is derived by:
Selecting the intra-frame prediction mode with the third largest amplitude value from the gradient histogram as a second candidate intra-frame prediction mode, comparing it with the first candidate intra-frame prediction mode, and selecting a final second intra-frame prediction mode;
The comparison method is as follows:
For the second candidate intra-frame prediction mode and the first candidate intra-frame prediction mode, the reference samples of the template are used to generate prediction samples of the template for each mode. Cost calculation is performed between the prediction samples corresponding to the second candidate intra-frame prediction mode and the reconstructed samples of the template to obtain the sum of absolute transformed differences corresponding to the second candidate intra-frame prediction mode, and cost calculation is performed between the prediction samples corresponding to the first candidate intra-frame prediction mode and the reconstructed samples of the template to obtain the sum of absolute transformed differences corresponding to the first candidate intra-frame prediction mode. The two sums are compared: if the sum of absolute transformed differences corresponding to the second candidate intra-frame prediction mode is the smaller, the second candidate intra-frame prediction mode is selected as the final second intra-frame prediction mode; if the sum of absolute transformed differences corresponding to the first candidate intra-frame prediction mode is the smaller, the first candidate intra-frame prediction mode is selected as the final second intra-frame prediction mode.
Step S307, calculating the predicted value of the current coding unit to be decoded by using the obtained intra-frame prediction mode.
Specifically, the intra prediction modes are the first intra prediction mode and the first candidate intra prediction mode, or the first intra prediction mode and the second intra prediction mode, or the intra prediction mode may be obtained by other intra prediction methods (for example, methods other than the DIMD mode mentioned in the embodiments of the present application).
It should be noted that, the embodiment of the present application can make the intra-frame prediction mode derived from DIMD modes more accurate, and can improve the prediction accuracy, thereby improving the compression efficiency.
It should be noted that, in the intra-frame prediction method provided in the embodiment of the present application, the execution body may be an intra-frame prediction device, or a control module in the intra-frame prediction device for executing the intra-frame prediction method. In the embodiment of the present application, an intra-frame prediction method performed by an intra-frame prediction device is taken as an example, and the intra-frame prediction device provided by the embodiment of the present application is described.
As shown in fig. 3, an embodiment of the present application provides an intra prediction apparatus 300, including:
A first obtaining module 301, configured to obtain a gradient histogram of an intra-frame prediction mode of a template corresponding to a coding unit to be decoded;
A second obtaining module 302, configured to obtain a first intra-prediction mode and a first candidate intra-prediction mode in the gradient histogram;
A third obtaining module 303, configured to obtain a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set;
A fourth obtaining module 304, configured to obtain a prediction sample of a coding unit to be decoded according to the first intra-frame prediction mode and the second intra-frame prediction mode;
Wherein the set of candidate intra-prediction modes includes at least one intra-prediction mode in the gradient histogram other than the first intra-prediction mode and the first candidate intra-prediction mode.
Optionally, the third obtaining module 303 includes:
the generating unit is used for generating a prediction sample of the template by using the first candidate intra-frame prediction mode and each intra-frame prediction mode in the candidate intra-frame prediction mode set respectively according to the reference sample of the template corresponding to the coding unit to be decoded;
The operation unit is used for carrying out first operation on the reconstruction sample of the template and the prediction sample of the template;
and the determining unit is used for determining a second intra-frame prediction mode in the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set according to the corresponding result of the first operation.
Optionally, the first operation is a cost calculation;
The determining unit is used for:
Determining, among the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set, the intra-frame prediction mode with the minimum cost calculation result as the second intra-frame prediction mode.
Optionally, before the first obtaining module 301 obtains the gradient histogram of the intra-prediction mode of the template corresponding to the coding unit to be decoded, the method further includes:
A fifth obtaining module, configured to obtain intra-frame prediction information of a coding unit to be decoded in a code stream, where the intra-frame prediction information includes a identifier of a decoding end derived intra-frame prediction mode DIMD;
the third obtaining module 303 is configured to:
And in the case that DIMD is identified as true, acquiring a second intra-prediction mode according to the first candidate intra-prediction mode and the candidate intra-prediction mode set.
Optionally, the third obtaining module 303 is configured to:
and under the condition that the image type of the image to which the coding unit to be decoded belongs is an I image, acquiring a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set.
Optionally, the apparatus further comprises:
a first processing module, configured to add the first intra-prediction mode and the first candidate intra-prediction mode to a most probable intra-prediction mode MPM list if the DIMD is identified as false.
Optionally, the apparatus further comprises one of:
A second processing module configured to add the first intra-prediction mode and the first candidate intra-prediction mode to an MPM list;
and a third processing module, configured to add the first intra-frame prediction mode and the second intra-frame prediction mode to an MPM list.
Optionally, the amplitude of the first intra-prediction mode and the first candidate intra-prediction mode are each greater than or equal to the amplitude of each intra-prediction mode in the set of candidate intra-prediction modes.
Optionally, the first intra-frame prediction mode is an intra-frame prediction mode with the largest amplitude corresponding to the intra-frame prediction mode included in the gradient histogram.
Optionally, the first candidate intra-prediction mode is an intra-prediction mode having a second largest amplitude corresponding to the intra-prediction mode included in the gradient histogram.
It should be noted that, compared with the mode of directly selecting the first intra-frame prediction mode and the second intra-frame prediction mode in the prior art, the embodiment of the application adds the secondary selection on the basis of the primary selection, can perform the optimal verification on the first selected intra-frame prediction mode, ensures that the finally determined intra-frame prediction mode is optimal, and can improve the prediction accuracy.
The intra-frame prediction apparatus in the embodiment of the present application may be an apparatus, an apparatus with an operating system or an electronic device, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus or electronic device may be a mobile terminal or a non-mobile terminal. By way of example, mobile terminals may include, but are not limited to, the types of terminals 11 listed above, and non-mobile terminals may be servers, network attached storage (Network Attached Storage, NAS), personal computers (personal computer, PCs), televisions (TVs), teller machines, self-service machines, etc., and embodiments of the present application are not limited in particular.
The intra-frame prediction apparatus provided by the embodiment of the present application can implement each process implemented by the method embodiment of fig. 1, and achieve the same technical effects, and in order to avoid repetition, a detailed description is omitted herein.
The embodiment of the application also provides an intra-frame prediction device, which comprises a processor and a communication interface, wherein the processor is used for acquiring a gradient histogram of an intra-frame prediction mode of a template corresponding to a coding unit to be decoded;
Acquiring a first intra-frame prediction mode and a first candidate intra-frame prediction mode in the gradient histogram;
acquiring a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set;
obtaining a prediction sample of a coding unit to be decoded according to the first intra-frame prediction mode and the second intra-frame prediction mode;
Wherein the set of candidate intra-prediction modes includes at least one intra-prediction mode in the gradient histogram other than the first intra-prediction mode and the first candidate intra-prediction mode.
The device embodiment corresponds to the device-side method embodiment, and each implementation process and implementation manner of the method embodiment can be applied to the device embodiment, and the same technical effects can be achieved. Specifically, fig. 4 is a schematic hardware structure of an intra prediction apparatus for implementing an embodiment of the present application.
The intra prediction apparatus 400 includes, but is not limited to, at least some of the components of a radio frequency unit 401, a network module 402, an audio output unit 403, an input unit 404, a sensor 405, a display unit 406, a user input unit 407, an interface unit 408, a memory 409, and a processor 410.
Those skilled in the art will appreciate that the intra-frame prediction apparatus 400 may further include a power source (e.g., a battery) for powering the various components, and the power source may be logically coupled to the processor 410 by a power management system to perform functions such as managing charging, discharging, and power consumption by the power management system. The intra prediction apparatus structure shown in fig. 4 does not constitute a limitation of the apparatus, and the intra prediction apparatus may include more or less components than those shown in the drawings, or may combine some components, or may be arranged in different components, which will not be described herein.
It should be appreciated that in embodiments of the present application, the input unit 404 may include a graphics processor (Graphics Processing Unit, GPU) 4041 and a microphone 4042, with the graphics processor 4041 processing image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071 is also referred to as a touch screen. The touch panel 4071 may include two parts, a touch detection device and a touch controller. Other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
In the embodiment of the present application, the radio frequency unit 401 receives downlink data from the network side device and processes the downlink data with the processor 410, and in addition, sends uplink data to the network side device. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
Memory 409 may be used to store software programs or instructions as well as various data. The memory 409 may mainly include a storage program or instruction area and a storage data area, wherein the storage program or instruction area may store an operating system, application programs or instructions (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like. In addition, the Memory 409 may include a high-speed random access Memory, and may also include a nonvolatile Memory, wherein the nonvolatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable Programmable ROM (EPROM), an Electrically Erasable Programmable EPROM (EEPROM), or a flash Memory. Such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
Processor 410 may include one or more processing units and, optionally, processor 410 may integrate an application processor that primarily processes operating systems, user interfaces, and applications or instructions, and a modem processor that primarily processes wireless communications, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
Wherein the processor 410 is configured to implement:
acquiring a gradient histogram of an intra-frame prediction mode of a template corresponding to a coding unit to be decoded;
Acquiring a first intra-frame prediction mode and a first candidate intra-frame prediction mode in the gradient histogram;
acquiring a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set;
obtaining a prediction sample of a coding unit to be decoded according to the first intra-frame prediction mode and the second intra-frame prediction mode;
Wherein the set of candidate intra-prediction modes includes at least one intra-prediction mode in the gradient histogram other than the first intra-prediction mode and the first candidate intra-prediction mode.
The intra-frame prediction device of the embodiment of the application obtains the first intra-frame prediction mode and the first candidate intra-frame prediction mode according to the gradient histogram, then determines the second intra-frame prediction mode according to the first candidate intra-frame prediction mode and other intra-frame prediction modes in the gradient histogram, and finally obtains the prediction sample of the coding unit to be decoded according to the first intra-frame prediction mode and the second intra-frame prediction mode.
Optionally, the processor 410 is configured to implement:
Respectively utilizing a first candidate intra-frame prediction mode and each intra-frame prediction mode in a candidate intra-frame prediction mode set to generate a prediction sample of a template corresponding to a coding unit to be decoded;
performing a first operation on the reconstructed sample of the template and the predicted sample of the template;
and determining a second intra-frame prediction mode in the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set according to a result corresponding to the first operation.
Optionally, the first operation is a cost calculation;
further, the processor 410 is configured to implement:
Determining, among the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set, the intra-frame prediction mode with the minimum cost calculation result as the second intra-frame prediction mode.
Optionally, the processor 410 is further configured to implement:
obtaining intra-frame prediction information of the coding unit to be decoded from the code stream, where the intra-frame prediction information includes a decoder-side intra mode derivation (DIMD) flag;
The obtaining a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set includes:
And in the case that DIMD is identified as true, acquiring a second intra-prediction mode according to the first candidate intra-prediction mode and the candidate intra-prediction mode set.
Optionally, the processor 410 is further configured to implement:
and under the condition that the image type of the image to which the coding unit to be decoded belongs is an I image, acquiring a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set.
Optionally, the processor 410 is further configured to implement:
and adding the first intra-prediction mode and the first candidate intra-prediction mode to a most probable intra-prediction mode MPM list if the DIMD flag is false.
Optionally, the processor 410 is further configured to implement one of:
adding the first intra-prediction mode and the first candidate intra-prediction mode to an MPM list;
And adding the first intra-frame prediction mode and the second intra-frame prediction mode into an MPM list.
Optionally, the amplitude of the first intra-prediction mode and the first candidate intra-prediction mode are each greater than or equal to the amplitude of each intra-prediction mode in the set of candidate intra-prediction modes.
Optionally, the first intra-frame prediction mode is an intra-frame prediction mode with the largest amplitude corresponding to the intra-frame prediction mode included in the gradient histogram.
Optionally, the first candidate intra-prediction mode is an intra-prediction mode having a second largest amplitude corresponding to the intra-prediction mode included in the gradient histogram.
Preferably, the embodiment of the present application further provides an intra-frame prediction apparatus, which includes a processor, a memory, and a program or an instruction stored in the memory and capable of running on the processor, where the program or the instruction when executed by the processor implements each process of the intra-frame prediction method embodiment, and the same technical effects can be achieved, and for avoiding repetition, a detailed description is omitted herein.
The embodiment of the application also provides a readable storage medium, and the readable storage medium stores a program or an instruction, which when executed by a processor, implements each process of the intra-frame prediction method embodiment, and can achieve the same technical effect, so that repetition is avoided, and no further description is provided here. The computer readable storage medium is, for example, a Read-Only Memory (ROM), a random access Memory (Random Access Memory RAM), a magnetic disk or an optical disk.
Optionally, as shown in Fig. 5, an embodiment of the present application further provides a communication device 500, which includes a processor 501, a memory 502, and a program or instruction stored in the memory 502 and executable on the processor 501. For example, when the communication device 500 is an intra-frame prediction apparatus, the program or instruction, when executed by the processor 501, implements each process of the above intra-frame prediction method embodiment and can achieve the same technical effects; to avoid repetition, details are not repeated here.
The embodiment of the present application further provides a chip, which includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement each process of the intra-frame prediction method embodiment and can achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip chip, etc.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in a reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it is clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, and may of course also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk), which includes instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Many other forms may be made by those of ordinary skill in the art in light of the present application without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (19)

1. An intra prediction method, comprising:
acquiring a gradient histogram of intra-frame prediction modes of a template corresponding to a coding unit to be decoded;
acquiring a first intra-frame prediction mode and a first candidate intra-frame prediction mode in the gradient histogram;
acquiring a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and a candidate intra-frame prediction mode set;
obtaining a prediction sample of the coding unit to be decoded according to the first intra-frame prediction mode and the second intra-frame prediction mode;
wherein the candidate intra-frame prediction mode set includes at least one intra-frame prediction mode in the gradient histogram other than the first intra-frame prediction mode and the first candidate intra-frame prediction mode.
2. The method of claim 1, wherein the obtaining of a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set comprises:
generating, by respectively using the first candidate intra-frame prediction mode and each intra-frame prediction mode in the candidate intra-frame prediction mode set, a corresponding prediction sample of the template corresponding to the coding unit to be decoded;
performing a first operation on a reconstructed sample of the template and the prediction sample of the template;
and determining the second intra-frame prediction mode from among the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set according to a result corresponding to the first operation.
3. The method of claim 2, wherein the first operation is a cost calculation;
and the determining of the second intra-frame prediction mode from among the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set according to the result corresponding to the first operation comprises:
determining, as the second intra-frame prediction mode, the intra-frame prediction mode with the minimum cost calculation result among the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set.
4. The method according to claim 1, further comprising, before the acquiring of the gradient histogram of intra-frame prediction modes of the template corresponding to the coding unit to be decoded:
obtaining, from a code stream, intra-frame prediction information of the coding unit to be decoded, wherein the intra-frame prediction information comprises a decoder-side derived intra-frame prediction mode (DIMD) flag;
wherein the obtaining of a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set comprises:
acquiring, in the case that the DIMD flag is true, the second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set.
5. The method of claim 4, wherein the obtaining of a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set comprises:
acquiring the second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set in the case that the picture to which the coding unit to be decoded belongs is an I picture.
6. The method as recited in claim 4, further comprising:
adding the first intra-frame prediction mode and the first candidate intra-frame prediction mode to a most probable intra-frame prediction mode (MPM) list in the case that the DIMD flag is false.
7. The method of claim 1, further comprising one of:
adding the first intra-frame prediction mode and the first candidate intra-frame prediction mode to an MPM list; or
adding the first intra-frame prediction mode and the second intra-frame prediction mode to an MPM list.
8. The method of claim 1, wherein the amplitudes of the first intra-frame prediction mode and the first candidate intra-frame prediction mode are each greater than or equal to the amplitude of each intra-frame prediction mode in the candidate intra-frame prediction mode set.
9. The method according to claim 1, wherein the first intra-frame prediction mode is the intra-frame prediction mode having the largest amplitude among the intra-frame prediction modes included in the gradient histogram.
10. The method of claim 1, wherein the first candidate intra-frame prediction mode is the intra-frame prediction mode having the second largest amplitude among the intra-frame prediction modes included in the gradient histogram.
11. An intra prediction apparatus, comprising:
a first obtaining module, configured to obtain a gradient histogram of intra-frame prediction modes of a template corresponding to a coding unit to be decoded;
a second obtaining module, configured to obtain a first intra-frame prediction mode and a first candidate intra-frame prediction mode in the gradient histogram;
a third obtaining module, configured to obtain a second intra-frame prediction mode according to the first candidate intra-frame prediction mode and a candidate intra-frame prediction mode set;
a fourth obtaining module, configured to obtain a prediction sample of the coding unit to be decoded according to the first intra-frame prediction mode and the second intra-frame prediction mode;
wherein the candidate intra-frame prediction mode set includes at least one intra-frame prediction mode in the gradient histogram other than the first intra-frame prediction mode and the first candidate intra-frame prediction mode.
12. The apparatus of claim 11, wherein the third obtaining module comprises:
a generating unit, configured to generate, according to a reference sample of the template corresponding to the coding unit to be decoded, a corresponding prediction sample of the template by respectively using the first candidate intra-frame prediction mode and each intra-frame prediction mode in the candidate intra-frame prediction mode set;
an operation unit, configured to perform a first operation on a reconstructed sample of the template and the prediction sample of the template;
and a determining unit, configured to determine the second intra-frame prediction mode from among the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set according to a result corresponding to the first operation.
13. The apparatus of claim 12, wherein the first operation is a cost calculation;
wherein the determining unit is configured to:
determine, as the second intra-frame prediction mode, the intra-frame prediction mode with the minimum cost calculation result among the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set.
14. The apparatus of claim 11, wherein, before the first obtaining module obtains the gradient histogram of intra-frame prediction modes of the template corresponding to the coding unit to be decoded, the apparatus further comprises:
a fifth obtaining module, configured to obtain, from a code stream, intra-frame prediction information of the coding unit to be decoded, wherein the intra-frame prediction information includes a decoder-side derived intra-frame prediction mode (DIMD) flag;
wherein the third obtaining module is configured to:
acquire, in the case that the DIMD flag is true, the second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set.
15. The apparatus of claim 14, wherein the third obtaining module is configured to:
acquire the second intra-frame prediction mode according to the first candidate intra-frame prediction mode and the candidate intra-frame prediction mode set in the case that the picture to which the coding unit to be decoded belongs is an I picture.
16. The apparatus as recited in claim 14, further comprising:
a first processing module, configured to add the first intra-frame prediction mode and the first candidate intra-frame prediction mode to a most probable intra-frame prediction mode (MPM) list in the case that the DIMD flag is false.
17. The apparatus of claim 11, further comprising one of:
a second processing module, configured to add the first intra-frame prediction mode and the first candidate intra-frame prediction mode to an MPM list; or
a third processing module, configured to add the first intra-frame prediction mode and the second intra-frame prediction mode to an MPM list.
18. An intra-frame prediction apparatus, comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the intra-frame prediction method according to any one of claims 1 to 10.
19. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the intra prediction method according to any of claims 1 to 10.
CN202111144070.0A 2021-09-28 2021-09-28 Intra-frame prediction method and device Active CN115883833B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111144070.0A CN115883833B (en) 2021-09-28 2021-09-28 Intra-frame prediction method and device
PCT/CN2022/120535 WO2023051375A1 (en) 2021-09-28 2022-09-22 Intra-frame prediction method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111144070.0A CN115883833B (en) 2021-09-28 2021-09-28 Intra-frame prediction method and device

Publications (2)

Publication Number Publication Date
CN115883833A CN115883833A (en) 2023-03-31
CN115883833B true CN115883833B (en) 2025-07-22

Family

ID=85763528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111144070.0A Active CN115883833B (en) 2021-09-28 2021-09-28 Intra-frame prediction method and device

Country Status (2)

Country Link
CN (1) CN115883833B (en)
WO (1) WO2023051375A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119277063A (en) * 2023-07-04 2025-01-07 维沃移动通信有限公司 Intra-frame prediction method, device and apparatus

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102724509A (en) * 2012-06-19 2012-10-10 清华大学 Method and device for selecting optimal intra-frame coding mode for video sequence
CN105120292A (en) * 2015-09-09 2015-12-02 厦门大学 Video coding intra-frame prediction method based on image texture features

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105812799B (en) * 2014-12-31 2019-03-08 阿里巴巴集团控股有限公司 The fast selecting method and its device of video intra-frame prediction mode
EP4518317A1 (en) * 2016-05-06 2025-03-05 InterDigital Madison Patent Holdings, SAS Method and system for decoder-side intra mode derivation for block-based video coding
EP3709644A1 (en) * 2019-03-12 2020-09-16 Ateme Method for image processing and apparatus for implementing the same
WO2021168817A1 (en) * 2020-02-28 2021-09-02 深圳市大疆创新科技有限公司 Video processing method and apparatus

Also Published As

Publication number Publication date
WO2023051375A1 (en) 2023-04-06
CN115883833A (en) 2023-03-31

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant