
WO2018134363A1 - Filter apparatus and methods - Google Patents

Filter apparatus and methods

Info

Publication number
WO2018134363A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
filtering
current block
filter
immediately subsequent
Prior art date
Application number
PCT/EP2018/051329
Other languages
French (fr)
Inventor
Kenneth Andersson
Per Wennersten
Jacob STRÖM
Jack ENHORN
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Publication of WO2018134363A1 publication Critical patent/WO2018134363A1/en

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation

Definitions

  • the present embodiments generally relate to filter apparatus and methods, for example to filter apparatus and methods for video coding and decoding, and in particular to deringing filtering in video coding and decoding.
  • HEVC High Efficiency Video Coding
  • JCT-VC Joint Collaborative Team on Video Coding
  • Spatial prediction is achieved using intra (I) prediction from within the current picture.
  • a picture consisting of only intra coded blocks is referred to as an I-picture.
  • Temporal prediction is achieved using inter (P) or bi-directional inter (B) prediction on block level.
  • HEVC was finalized in 2013.
  • JVET Joint Video Exploration Team
  • Ringing also referred to as Gibbs phenomenon, appears in video frames as oscillations near sharp edges. It is a result of a cut-off of high-frequency information in the block Discrete Cosine Transform (DCT) transformation and lossy quantization process. Ringing also comes from inter prediction where sub-pixel interpolation using a filter with negative weights can cause ringing near sharp edges. Artificial patterns that resemble ringing can also appear from intra prediction, as shown in the right part of Figure 1 (whereby Figures 1 (A) and (B) illustrate the ringing effect on a zoomed original video frame and a zoomed compressed video frame respectively). The ringing effect degrades the objective and subjective quality of video frames.
  • DCT Discrete Cosine Transform
  • bilateral filtering is widely used in image processing because of its edge-preserving and noise-reducing features.
  • a bilateral filter decides its coefficients based on the contrast of the pixels in addition to the geometric distance.
  • a Gaussian function has usually been used to relate coefficients to the geometric distance and contrast of the pixel values.
  • the weight ω(i, j, k, l) assigned for pixel (k, l) to filter the pixel (i, j) is defined as:

    ω(i, j, k, l) = exp( −((i − k)² + (j − l)²) / (2σ_d²) − (I(i, j) − I(k, l))² / (2σ_r²) )

    where σ_d is the spatial parameter and σ_r is the range parameter.
  • the bilateral filter is controlled by these two parameters. I(i, j) and I(k, l) are the original intensity levels of pixels (i, j) and (k, l) respectively.
  • I_D(i, j) is the filtered intensity of pixel (i, j).
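The weight definition above can be sketched as a minimal Python illustration. This is not the patent's implementation: the aperture (here, an explicit neighbor list) and the parameter values are assumptions for demonstration only.

```python
import math

def bilateral_weight(i, j, k, l, I, sigma_d, sigma_r):
    """Weight assigned to neighbor pixel (k, l) when filtering pixel (i, j)."""
    spatial = ((i - k) ** 2 + (j - l) ** 2) / (2.0 * sigma_d ** 2)
    rng = (I[i][j] - I[k][l]) ** 2 / (2.0 * sigma_r ** 2)
    return math.exp(-(spatial + rng))

def filter_pixel(i, j, I, neighbors, sigma_d, sigma_r):
    """Filtered intensity I_D(i, j): weighted average of (i, j) and its neighbors."""
    num = den = 0.0
    for (k, l) in [(i, j)] + neighbors:
        w = bilateral_weight(i, j, k, l, I, sigma_d, sigma_r)
        num += w * I[k][l]
        den += w
    return num / den
```

On a flat image all weights act on the same intensity, so the filter leaves the pixel unchanged, which is the expected edge-preserving behavior.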
  • Rate-Distortion Optimization is part of the video encoding process. It improves coding efficiency by finding the "best" coding parameters. It measures both the number of bits used for each possible decision outcome of the block and the resulting distortion of the block.
  • a deblocking filter (DBF) and a Sample Adaptive Offset (SAO) filter are included in the HEVC standard.
  • DPF deblocking filter
  • SAO Sample Adaptive Offset
  • ALF Adaptive Loop Filter
  • SAO will remove some of the ringing artifacts but there is still room for improvements.
  • Another problem with deploying bilateral filtering in video coding is that such filters are too complex and lack sufficient parameter settings and adaptivity.
  • the embodiments disclosed herein relate to further improvements to a filter.
  • a method performed by a filter, for filtering of a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value, wherein a pixel value is modified by a weighted combination of the pixel value and at least one spatially neighboring pixel value.
  • the method comprises performing a filtering operation on a block by block basis, each block comprising rows and columns of pixels.
  • the filtering operation comprises determining where an immediately subsequent block to the current block will occur; and avoiding the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur.
  • a filter for filtering of a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value, the filter being configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value.
  • the filter is configured to filter on a block by block basis, each block comprising rows and columns of pixels.
  • the filter is configured to determine where an immediately subsequent block to the current block will occur, and avoid the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur.
  • a decoder comprising a modifying means configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value.
  • the modifying means is operative to filter on a block by block basis, each block comprising rows and columns of pixels.
  • the modifying means is operative to determine where an immediately subsequent block to the current block will occur, and avoid the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur.
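The determine-and-avoid operation described above can be sketched as follows. Which border pixels are left unfiltered for each position of the immediately subsequent block (here labeled "right", "below", "right_and_below") is an assumption for illustration; the point is that the skipped pixels can feed intra prediction of the next block before filtering finishes.

```python
def pixels_to_filter(rows, cols, next_block_position):
    """Return the set of (row, col) positions the filter may modify."""
    skip_last_col = next_block_position in ("right", "right_and_below")
    skip_last_row = next_block_position in ("below", "right_and_below")
    keep = set()
    for r in range(rows):
        for c in range(cols):
            if skip_last_col and c == cols - 1:
                continue  # right column feeds intra prediction of the next block
            if skip_last_row and r == rows - 1:
                continue  # bottom row feeds intra prediction of the next block
            keep.add((r, c))
    return keep
```

For a 4x4 block with the next block to the right, the rightmost column stays unfiltered and only 12 of the 16 pixels are filtered.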
  • a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method as described in the embodiments herein, and defined in the appended claims.
  • a computer program product comprising a computer-readable medium with the computer program as above.
  • Figures 1 (A) and (B) illustrate the ringing effect on a zoomed original video frame and a zoomed compressed video frame respectively;
  • Figure 2 illustrates an 8x8 transform unit block and the filter aperture for the pixel located at (1, 1);
  • Figure 3 illustrates a plus sign shaped deringing filter aperture
  • Figure 5 illustrates the steps performed in a filtering method according to an example
  • Figure 6 illustrates a decoder according to an example
  • Figure 7 illustrates a data processing system in accordance with an example
  • Figure 8 shows an example of a method according to an embodiment
  • Figure 9 shows an example of block filtering according to an embodiment
  • Figure 10 shows an example of block filtering according to an embodiment
  • Figure 11 shows an example of block filtering according to an embodiment
  • Figure 12 shows an example of block filtering according to an embodiment
  • Figure 13 shows an example of block filtering according to an embodiment
  • Figure 14 shows an example of how a block may be partitioned into smaller blocks for coding and/or decoding
  • Figure 15 shows an example of a filter according to an embodiment
  • Figure 16 shows an example of a video coding system having a filter according to an embodiment
  • Figure 17 shows an example of a decoder according to an embodiment
  • Figure 18 illustrates schematically a video encoder according to an embodiment
  • Figure 19 illustrates schematically a video decoder according to an embodiment.
  • Hardware implementation may include or encompass, without limitation, digital signal processor (DSP) hardware, a reduced instruction set processor, hardware (e.g., digital or analog) circuitry including but not limited to application specific integrated circuit(s) (ASIC) and/or field programmable gate array(s) (FPGA(s)), and (where appropriate) state machines capable of performing such functions.
  • DSP digital signal processor
  • ASIC application specific integrated circuit
  • FPGA field programmable gate array
  • a computer is generally understood to comprise one or more processors, one or more processing units, one or more processing modules or one or more controllers, and the terms computer, processor, processing unit, processing module and controller may be employed interchangeably.
  • the functions may be provided by a single dedicated computer, processor, processing unit, processing module or controller, by a single shared computer, processor, processing unit, processing module or controller, or by a plurality of individual computers, processors, processing units, processing modules or controllers, some of which may be shared or distributed.
  • these terms also refer to other hardware capable of performing such functions and/or executing software, such as the example hardware recited above.
  • the filters described herein may be used in any form of user equipment, such as a mobile telephone, tablet, desktop, netbook, multimedia player, video streaming server, set-top box or computer.
  • user equipment UE
  • UE user equipment
  • a UE herein may comprise a UE (in its general sense) capable of operating or at least performing measurements in one or more frequencies, carrier frequencies, component carriers or frequency bands.
  • as a terminal device, it may be a “UE” operating in single- or multi- radio access technology (RAT) or multi-standard mode.
  • RAT radio access technology
  • wireless communication device the general terms “terminal device”, “communication device” and “wireless communication device” are used in the following description, and it will be appreciated that such a device may or may not be 'mobile' in the sense that it is carried by a user.
  • the term “terminal device” encompasses any device that is capable of communicating with communication networks that operate according to one or more mobile communication standards, such as the Global System for Mobile communications, GSM, UMTS, Long-Term Evolution, LTE, etc.
  • a UE may comprise a Universal Subscription Identity Module (USIM) on a smart-card or implemented directly in the UE, e.g., as software or as an integrated circuit.
  • USIM Universal Subscription Identity Module
  • the operations described herein may be partly or fully implemented in the USIM or outside of the USIM.
  • Embodiments described here are related to providing filtering blocks that can be used in filters, including for example a deringing filter as described in the earlier application, for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
  • Applying a filter such as the bilateral filter, for example directly after the transform for a current block, adds one additional filtering step before one can perform intra prediction of a block to the right of and/or below the current block.
  • Embodiments herein disclose how the pixels within a block can be selectively filtered, such that in certain circumstances a next block can be predicted from unfiltered pixels. This means, for example, that a next block can be predicted before the filtering of reconstructed samples in the block has finished, thereby reducing latency.
  • PCT/SE2017/050776, filed on 11 July 2017, provides background and context for the present invention.
  • the embodiments of the earlier application will be referred to below as examples.
  • the examples of the earlier application provide advantages in that the proposed filtering removes ringing artifacts in compressed video frames so a better video quality (both objectively and subjectively) can be achieved with a small increase in codec complexity.
  • Objectively, coding efficiency as calculated by Bjøntegaard-Delta bit rate (BD-rate) is improved by between 0.5% and 0.7%.
  • a bilateral deringing filter with a plus sign shaped filter aperture is used directly after inverse transform.
  • An identical filter and identical filtering process is used in the corresponding video encoder and decoder to ensure that there is no drift between the encoder and the decoder.
  • the first example describes a way to remove ringing artifacts by using a deringing filter designed in the earlier application.
  • the deringing filter is evolved from a bilateral filter.
  • each pixel in the reconstructed picture is replaced by a weighted average of itself and its neighbors. For instance, a pixel located at (i, j), will be filtered using its neighboring pixel (k, I).
  • the weight ω(i, j, k, l) is the weight assigned for pixel (k, l) to filter the pixel (i, j), and it is defined as:

    ω(i, j, k, l) = exp( −((i − k)² + (j − l)²) / (2σ_d²) − (I(i, j) − I(k, l))² / (2σ_r²) )

    where I(i, j) and I(k, l) are the original reconstructed intensity values of pixels (i, j) and (k, l) respectively.
  • σ_d is the spatial parameter
  • σ_r is the range parameter.
  • the bilateral filter is controlled by these two parameters.
  • the weight of a reference pixel (k, l) to the pixel (i, j) is dependent both on the distance between the pixels and the intensity difference between the pixels.
  • the pixels located closer to the pixel to be filtered, and that have smaller intensity difference to the pixel to be filtered, will have larger weight than the other more distant (spatial or intensity) pixels.
  • σ_d and σ_r are constant values.
  • the deringing filter in this example is applied to each TU block after the inverse transform in an encoder, as shown in Figure 2, which shows an example of an 8x8 block. This means, for example, that subsequent intra-coded blocks will predict from the filtered pixel values.
  • the filter may also be used during R-D optimization in the encoder.
  • the identical deringing filter is also applied to each TU block after the inverse transform in the corresponding video decoder.
  • each pixel in the transform unit is filtered using its direct neighboring pixels only, as shown in Figure 3.
  • the filter has a plus sign shaped filter aperture centered at the pixel to be filtered.
  • the output filtered pixel intensity I_D(i, j) is:

    I_D(i, j) = Σ I(k, l) · ω(i, j, k, l) / Σ ω(i, j, k, l),

    where the sums run over the pixel (i, j) itself and its neighbors within the filter aperture.
  • all possible weights (coefficients) of the proposed deringing filter are calculated and stored in a two-dimensional look-up table (LUT).
  • the LUT can, for instance, use the spatial distance and intensity difference between the pixel to be filtered and the reference pixels as indices of the LUT.
  • the filter aperture is a plus sign shape.
  • a one-dimensional lookup table (LUT) indexed on the difference in intensity, or indexed on the absolute value of the difference in intensity.
  • Instead of one LUT, one could have one LUT dedicated to a weight dependent on distance from the current pixel (w_d) and another LUT dedicated to a weight dependent on closeness in pixel value (w_r). It should be noted that the exponential function used to determine the weights could be some other function as well.
  • the LUT could be optimized based on some error metric (SSD, SSIM) or according to human vision. One could also have one LUT for weights vertically above or below the current pixel and another LUT for weights horizontally left or right of the current pixel.
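The one-dimensional LUT idea above can be sketched as follows: precompute the range weight for every possible absolute intensity difference, then index the table at filter time instead of calling exp(). The table size (10-bit video, differences 0..1023) and the σ_r value are illustrative assumptions.

```python
import math

def build_range_lut(max_abs_diff, sigma_r):
    """Range weight table indexed by absolute intensity difference."""
    return [math.exp(-(d * d) / (2.0 * sigma_r ** 2))
            for d in range(max_abs_diff + 1)]

lut = build_range_lut(1023, 20.0)  # sigma_r = 20.0 is a placeholder value

def range_weight(pixel_a, pixel_b):
    """Table lookup replacing the exponential at filter time."""
    return lut[abs(pixel_a - pixel_b)]
```

The weight is 1 for identical pixels and decays monotonically with the intensity difference, matching the exponential form of the weight equation.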
  • a deringing filter with a rectangular shaped filter aperture is used in the video encoder's R-D optimization process.
  • the same filter is also used in the corresponding video decoder.
  • each pixel is filtered using its neighboring pixels within a M by N size rectangular shaped filter aperture centered at the pixel to be filtered, as shown in Figure 4.
  • the same deringing filter as in the first example is used.
  • the deringing filter according to the third example of the earlier application is used after prediction and transform have been performed for an entire frame or part of a frame.
  • the same filter is also used in the corresponding video decoder.
  • the third example is the same as the first or second example, except that the filtering is not done right after the inverse transform. Instead the proposed filter is applied to the reconstructed picture in both encoder and decoder. On the one hand this could lead to worse performance since filtered pixels will not be used for intra prediction, but on the other hand the difference is likely very small and the existing filters are currently placed at this stage of the encoder/decoder.
  • σ_d and/or σ_r are related to the Transform Unit, TU, size.
  • σ_d and σ_r can be a function of the form (e.g. a polynomial function):
  • σ_d = 0.92 − min{TU block width, TU block height} × 0.025, or
  • σ_d = 0.92 − max{TU block width, TU block height} × 0.025.
  • σ_r can be separate for the filter coefficients vertically and horizontally, so that σ_r_ver and σ_r_hor are each a function of the form (e.g. a polynomial function):
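The min-based TU-size formula above can be written out as a small helper; the constants 0.92 and 0.025 come directly from the example formula, while the choice of the min variant (rather than max) is arbitrary here.

```python
def sigma_d_for_tu(tu_width, tu_height):
    """Spatial parameter from TU size: smaller blocks get a larger sigma_d."""
    return 0.92 - min(tu_width, tu_height) * 0.025
```

For a 4x4 TU this gives 0.92 − 4 × 0.025 = 0.82, and the value shrinks as the TU grows, i.e. smaller blocks are filtered more strongly in the spatial dimension.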
  • σ_d and σ_r are related to the Quantization Parameter, QP, value.
  • σ_d and σ_r can be a function of the form:
  • bit_depth, i.e. the number of bits used to represent pixels in the video.
  • if both σ_d and σ_r are derived based on QP, a preferred example is to have different functions f_3 ≠ f_4.
  • the QP mentioned here relates to the coarseness of the quantization of transform coefficients.
  • the QP can correspond to a picture or slice QP or even a locally used QP, i.e. QP for TU block.
  • QP can be defined differently in different standards so that the QP in one standard do not correspond to the QP in another standard.
  • HEVC High Efficiency Video Coding
  • In JEM, six steps of QP change double the quantization step. This could be different in a final version of H.266, where steps could be finer or coarser and the range could be extended beyond 51.
  • the range parameter is a polynomial model, for example first order model, of the QP.
  • Another approach is to define a table with an entry for each QP, where each entry relates to the reconstruction level of at least one transform coefficient quantized with QP to 1.
  • a table of σ_d and/or a table of σ_r can be created where each entry, i.e. QP value, relates to the reconstruction level, i.e. the pixel value after inverse transform and inverse quantization, for one transform coefficient quantized with QP to 1, e.g. the smallest possible value a quantized transform coefficient can have.
  • This reconstruction level indicates the smallest pixel value change that can originate from a true signal. Changes smaller than half of this value can be regarded as coding noise that the deringing filter should remove.
  • HEVC uses by default a uniform reconstruction quantization (URQ) scheme that quantizes frequencies equally.
  • HEVC has the option of using quantization scaling matrices, also referred to as scaling lists, either default ones, or quantization scaling matrices that are signaled as scaling list data in the sequence parameter set (SPS) or picture parameter set (PPS).
  • SPS sequence parameter set
  • PPS picture parameter set
  • scaling matrices are typically only specified for 4x4 and 8x8 matrices.
  • for larger transforms, the signaled 8x8 matrix is applied by having 2x2 and 4x4 blocks share the same scaling value, except at the DC positions.
  • a scaling matrix with individual scaling factors for each transform coefficient can be used to produce a different quantization effect per transform coefficient by scaling the transform coefficients individually with their respective scaling factors as part of the quantization. This enables, for example, the quantization effect to be stronger for higher frequency transform coefficients than for lower frequency transform coefficients.
  • default scaling matrices are defined for each transform size and can be invoked by flags in the SPS and/or the PPS. Scaling matrices also exist in H.264.
  • In HEVC it is also possible to define one's own scaling matrices in the SPS or PPS, specifically for each combination of color component, transform size and prediction type (intra or inter mode).
  • deringing filtering is performed for at least the reconstructed sample values from one transform coefficient using the corresponding scaling factor, as the QP, to determine σ_d and/or σ_r. This could be performed before adding the intra/inter prediction or after adding the intra/inter prediction.
  • Another less complex approach would be to use the maximum or minimum scaling factor, as the QP, to determine σ_d and/or σ_r.
  • the size of the filter can also be dependent on the QP, so that the filter is larger for larger QPs than for small QPs.
  • the width and/or the height of the filter kernel of the deringing filter is defined for each QP.
  • Another example is to use a first width and/or a first height of the filter kernel for QP values equal or smaller than a threshold and a second, different width and/or a second, different height for QP values larger than a threshold.
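The threshold rule above can be sketched as a one-line selector: one kernel size for QP values at or below a threshold, and a second, different size above it. The threshold and the two kernel sizes below are illustrative assumptions, not values from the text.

```python
def kernel_size_for_qp(qp, threshold=30, small=(3, 3), large=(5, 5)):
    """Return (width, height) of the deringing filter kernel for a given QP.

    threshold, small and large are placeholder values: coarser quantization
    (larger QP) gets the larger kernel, per the rule in the text.
    """
    return small if qp <= threshold else large
```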
  • σ_d and σ_r are related to video resolution.
  • σ_d and σ_r can be a function of the form:
  • the size of the filter can also be dependent on the size of the frame. If both σ_d and σ_r are derived based on the frame diagonal, a preferred example is to have different functions f_5 ≠ f_6.
  • At least one of the spatial parameter and the range parameter can be set such that stronger deringing filtering is applied for small resolutions as compared to large resolutions.
  • σ_d and σ_r are related to QP, TU block size, video resolution and other video properties.
  • σ_d and σ_r can be a function of the form:
  • An example may comprise example 1 combined with the functions
  • the de-ringing filter is applied if the inter prediction is interpolated, e.g. not integer-pixel motion, or the intra prediction is predicted from reference samples in a specific direction (e.g. non-DC), or the transform block has non-zero transform coefficients.
  • De-ringing can be applied directly after intra/inter prediction to improve the accuracy of the prediction signal or directly after the transform on residual samples to remove transform effects or on reconstructed samples (after addition of intra/inter prediction and residual) to remove both ringing effects from prediction and transform or both on intra/inter prediction and residual or reconstruction.
  • the filter weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size can be set individually for intra prediction mode and/or inter prediction mode.
  • the filter weights and/or filter size can be different in vertical and horizontal direction depending on intra prediction mode or interpolation filter used for inter prediction. For example, if close to horizontal intra prediction is performed the weights could be smaller for the horizontal direction than the vertical direction and for close to vertical intra prediction weights could be smaller for the vertical direction than the horizontal direction. If sub-pel interpolation with an interpolation filter with negative filter coefficients only is applied in the vertical direction the filter weights could be smaller in the horizontal direction than in the vertical direction and if sub-pel interpolation filter with negative filter coefficients only is applied in the horizontal direction the filter weights could be smaller in the vertical direction than in the horizontal direction.
  • the filter weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size can depend on the position of non-zero transform coefficients.
  • the filter weights and/or filter size can be different in the vertical and horizontal directions depending on the non-zero transform coefficient positions. For example, if non-zero transform coefficients only exist in the vertical direction at the lowest frequency in the horizontal direction, the filter weights can be smaller in the horizontal direction than in the vertical direction. Alternatively, the filter is only applied in the vertical direction. Similarly, if non-zero transform coefficients only exist in the horizontal direction at the lowest frequency in the vertical direction, the filter weights can be smaller in the vertical direction than in the horizontal direction. Alternatively, the filter is only applied in the horizontal direction.
  • the filter weights and/or filter size can also be dependent on existence of non-zero transform coefficients above a certain frequency.
  • the filter weights can be smaller if only low frequency non-zero transform coefficients exist than when high frequency non-zero transform coefficients exist.
  • the filter weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size can differ depending on the transform type.
  • The type of transform can refer to transform skip, KLT-like transforms, DCT-like transforms, DST transforms, non-separable 2D transforms, rotational transforms and combinations of those.
  • the bilateral filter could be applied only to fast transforms, with weight equal to 0 for all other transform types.
  • the filtering may be implemented as a differential filter whose output is clipped (Clip) to be larger than or equal to a MIN value and less than or equal to a MAX value, and then added to the pixel value, instead of using a smoothing filter kernel like the Gaussian.
  • the differential filter can for example be designed as the difference between a Dirac function and a Gaussian filter kernel.
  • a sign (s) can optionally also be used to make the filtering to enhance edges rather than smooth edges if that is desired for some cases.
  • the MAX and MIN value can be a function of other parameters as discussed in other examples.
  • the usage of a clipping function can be omitted, but it allows extra freedom to limit the amount of filtering, enabling the use of a stronger bilateral filter while limiting how much it is allowed to change the pixel value.
  • the filtering can be described as a vertical filtering part and a horizontal filtering part as shown below:
  • the MAX_hor, MAX_ver, MIN_hor and MIN_ver can be a function of other parameters as discussed in other examples.
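The clipped differential filtering above can be sketched as follows: compute a correction from neighboring pixels, clip it to [MIN, MAX], optionally flip its sign s to enhance rather than smooth edges, and add it to the pixel. The simple neighbor-average correction used here is an illustrative stand-in for the Dirac-minus-Gaussian kernel; the MIN/MAX values are placeholders.

```python
def clip(v, lo, hi):
    """Clip v to the inclusive range [lo, hi]."""
    return max(lo, min(hi, v))

def differential_filter(center, neighbors, min_v=-4, max_v=4, s=1):
    """Add a clipped, optionally sign-flipped correction to the pixel value."""
    correction = sum(n - center for n in neighbors) / len(neighbors)
    return center + s * clip(correction, min_v, max_v)
```

With s = 1 the filter smooths toward the neighborhood; with s = -1 the same clipped correction is subtracted, enhancing the edge instead.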
  • one aspect is to keep the size of a LUT small.
  • with the σ_d and σ_r parameters used directly, the size of the LUT can become quite big.
  • the absolute difference between two luma values can be between 0 and 1023.
  • Equation 1 can be rewritten as
  • Equation 5. The first factor of the expression in Equation 5 depends on σ_d. Since there are four TU sizes, there are four different possible values of σ_d.
  • Equation (2) thus becomes:
  • the value that will be fetched from the LUT will be 0 (since the LUT entry for 59 is zero), which is correct.
  • An alternative is to make the LUT larger, up to the nearest power of two minus one, in this case 31. Thus it is sufficient to check if any bit larger than bit 5 is set. If so, 31 is used; otherwise the value is used as is.
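The bit-test clamp described above can be sketched as a single masked comparison: with a LUT sized to 31 entries plus index 0 (a power of two minus one), any index with a bit set above the low five bits is out of range and maps to the last entry.

```python
def clamp_lut_index(diff):
    """Clamp a LUT index to 0..31 with a single bit test instead of a compare."""
    # ~31 masks off bits 0..4; any remaining set bit means diff >= 32.
    return 31 if diff & ~31 else diff
```

This avoids a branch on an arbitrary comparison and works for any LUT whose size is a power of two.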
  • the approach as described above can be implemented with filtering in float or in integers (8-, 16- or 32-bit).
  • a table lookup is used to determine respective weight.
  • filtering in integers can avoid division by doing a table lookup of a multiplication factor and a shift factor.
  • lookup_M determines a multiplication factor to bring the gain of the filtering close to unity (the weights sum up to 1 << lookup_Sh), given that the "division" using right shift (>>) restricts the divisor, determined by the shift value (lookup_Sh), to powers of two.
  • lookup_Sh(A) gives a shift factor that, together with the multiplication factor lookup_M, gives a sufficient approximation of 1/A.
  • roundF is a rounding factor equal to lookup_Sh >> 1. If this approximation is done so that the gain is less than or equal to unity, the filtering will not push the value of the filtered pixel outside the range of the pixel values in the neighborhood before the filtering.
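The division-free normalization above can be sketched as follows: instead of dividing the weighted sum by the weight sum A, look up a multiplier M and shift Sh such that (x * M) >> Sh approximates x / A. Computing the pair on the fly, a fixed shift of 15 and the rounding term 1 << (Sh − 1) are illustrative assumptions; a codec would precompute the tables.

```python
def lookup_M_Sh(A, sh=15):
    """Multiplier/shift pair so that (x * M) >> sh approximates x / A.

    Flooring the multiplier keeps the gain less than or equal to unity.
    """
    return ((1 << sh) // A, sh)

def normalize(weighted_sum, weight_sum):
    """Division-free replacement for weighted_sum / weight_sum."""
    m, sh = lookup_M_Sh(weight_sum)
    round_f = 1 << (sh - 1)  # common rounding term, used here for illustration
    return (weighted_sum * m + round_f) >> sh
```

For example, normalize(100, 4) reproduces the exact division 100 / 4 = 25 without any divide instruction in the filtering loop.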
  • one approach to reduce the amount of filtering is to omit filtering if the sum of the weights is equal to the weight for the center pixel.
  • the filtering as described in other examples can alternatively be performed by separable filtering in the horizontal and vertical directions instead of the 2D filtering mostly described in other examples.
  • one set of weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size is used for blocks that have been intra predicted and another set of weights and/or filter size is used for blocks that have been inter predicted.
  • the weights are set to reduce the amount of filtering for blocks which have been predicted with higher quality compared to blocks that have been predicted with lower quality. Since blocks that have been inter predicted typically have higher quality than blocks that have been intra predicted, they are filtered less to preserve the prediction quality.
  • Example weights for intra predicted blocks are:
  • Example weights for inter predicted blocks are:
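The per-prediction-type weight selection above can be sketched as a simple lookup. The actual weight values are not given in this text; the numbers below are placeholders only, chosen so that inter-predicted blocks (typically higher quality) receive smaller weights and are filtered less.

```python
# Placeholder weight sets; the real values are defined by the encoder/decoder.
INTRA_WEIGHTS = {"sigma_d": 0.92, "sigma_r": 25.0}
INTER_WEIGHTS = {"sigma_d": 0.72, "sigma_r": 15.0}

def weights_for_block(is_intra):
    """Select the deringing weight set based on the block's prediction type."""
    return INTRA_WEIGHTS if is_intra else INTER_WEIGHTS
```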
  • Example 18 In this example one set of weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size depends on picture type/slice type.
  • One example is to use one set of weights for intra pictures/slices and another set of weights for inter pictures/slices.
  • One example is to have one w_d (or similarly σ_d) for pictures/slices that have only been intra predicted and a smaller w_d (or similarly σ_d) for other pictures/slices.
  • Example weights for intra pictures/slices are:
  • Example weights for inter pictures/slices are:
  • B slices, which typically have better prediction quality than P slices (only single prediction), can in another variant of this example have a smaller weight than P slices.
  • generalized B-slices that are used instead of P-slices for uni-directional prediction can have same weight as P-slices.
  • "normal" B-slices that can predict from both future and past can have a larger weight than generalized B-slices.
  • Example weights for "normal" B-slices are:
  • one set of weights (w_d, w_r or similarly σ_d, σ_r) and/or filter size is used for intra pictures/slices, another set of weights is used for inter pictures/slices that are used as reference for prediction of other pictures, and a third set of weights is used for inter pictures/slices that are not used as reference for prediction of other pictures.
  • One example is to have one w_d (or similarly σ_d) for pictures/slices that have only been intra predicted, a somewhat smaller w_d (or similarly σ_d) for pictures/slices that have been inter predicted and are used for predicting other pictures, and the smallest w_d (or similarly σ_d) for pictures/slices that have been inter predicted but are not used for prediction of other pictures (non-reference pictures).
  • Example weights for intra pictures/slices are:
  • Example weights for inter pictures/slices, e.g. P_SLICE, B_SLICE, that are not used for reference (non-reference pictures) are:
  • Example weights for inter pictures/slices (e.g. P_SLICE, B_SLICE) that are used for reference are:
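The three-tier selection described above can be sketched as follows. The slice-type names follow the text (I_SLICE, P_SLICE, B_SLICE), but the numeric sigma values are purely illustrative assumptions:

```python
# Hypothetical sigma_d table for the three-tier scheme: intra slices get the
# largest weight (filtered most), referenced inter slices a smaller one, and
# non-reference inter slices the smallest.

def select_sigma_d(slice_type, used_for_reference):
    """Return a spatial filter weight for a picture/slice."""
    if slice_type == "I_SLICE":
        return 0.8                 # only intra prediction: filter most
    if used_for_reference:         # P_SLICE / B_SLICE used as reference
        return 0.6
    return 0.4                     # non-reference inter picture: filter least
```

An encoder and decoder would have to agree on such a table (or, as in example 20, the encoder could signal the chosen values explicitly).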
  • Example 20
  • an encoder can select which values of the weights to use and encode them in SPS (sequence parameter sets), PPS (picture parameter sets) or slice header.
  • a decoder can then decode the values of the weights to be used for filtering respective picture/slice.
  • a decoder can then decode the values of the weights to be used for blocks that are intra predicted and the values of the weights to be used for blocks that are inter predicted.
  • a data processing system as illustrated in Figure 7, can be used to implement the filter of the examples described above.
  • the data processing system includes at least one processor that is further coupled to a network interface via an interconnect.
  • the at least one processor is also coupled to a memory via the interconnect.
  • the memory can be implemented by a hard disk drive, flash memory, or read-only memory and stores computer-readable instructions.
  • the at least one processor executes the computer- readable instructions and implements the functionality described above.
  • the network interface enables the data processing system to communicate with other nodes in a network.
  • Alternative examples may include additional components responsible for providing additional functionality, including any functionality described above and/or any functionality necessary to support the solution described herein.
  • this example relates to filtering blocks that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
  • Having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block to the right of the current block.
  • all pixels are filtered except the rightmost column of the current block if the block to the right can use intra prediction.
  • the block is reconstructed, for example by adding the dequantized and inverse transformed coefficients (residual) to the prediction samples. Since the bilateral filter is applied to all samples except the rightmost column, in a case where there is a block to the right that uses intra prediction by extrapolation, this can be predicted directly after reconstruction of current block, if desired, and does not have to wait for the filtering to be performed.
  • the method of filtering further comprises the steps of: determining if a block to the right of a current block can use intra prediction, and, if so, selectively filtering all pixels in the block, except the rightmost column of the current block.
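A minimal sketch of this selective filtering, using a placeholder 3-tap horizontal average instead of the bilateral filter (the kernel is an assumption made only to keep the example short). Every pixel is filtered except those in the rightmost column, which therefore stay available, unmodified, to a block to the right:

```python
def filter_except_right_column(block):
    """block: list of rows of ints. Returns a new block where every pixel
    except those in the last column is replaced by the average of itself and
    its horizontal neighbours (clamped at the block edge). The last column is
    copied unchanged so a block to the right can start intra prediction from
    it before filtering finishes."""
    h, w = len(block), len(block[0])
    out = [row[:] for row in block]
    for y in range(h):
        for x in range(w - 1):             # skip the rightmost column
            left = block[y][max(x - 1, 0)]
            right = block[y][min(x + 1, w - 1)]
            out[y][x] = (left + block[y][x] + right) // 3
    return out
```

The filtering always reads from the unfiltered input, so results do not depend on the order in which pixels are processed.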
  • this example relates to filtering blocks that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards
  • Having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block below the current block.
  • all pixels are filtered except the bottom row of the current block if the block below uses intra prediction.
  • the block is reconstructed, for example by adding the dequantized and inverse transformed coefficients (residual) to the prediction samples. Since the bilateral filter is applied to all samples except the bottom row, in a case where there is a block below the current block that uses intra prediction by extrapolation, this can be predicted directly after the reconstruction of the current block, if desired, and does not have to wait for the filtering to be performed.
  • the method of filtering further comprises the steps of: determining if a block below a current block can use intra prediction, and, if so, selectively filtering all pixels in the block, except the bottom row of the current block. In one example, the bottommost row is never filtered if the next block in the coding order is below the current block (including below to the left), as will be described further in examples 7 and 8 below.
  • the bottom row is never filtered, as will also be described further in examples 7 and 8 below.
  • this example relates to filtering blocks of samples that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
  • Having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block to the right of the current block.
  • pixels in the rightmost column of the current block are filtered if the block to the right can use intra prediction.
  • the block is reconstructed, for example by adding the dequantized and inverse transformed coefficients (residual) to the prediction samples. Then the rightmost column is filtered and can then be used for intra prediction of a block to the right of the current block.
  • the method of filtering further comprises the steps of: determining if a block to the right of a current block can use intra prediction, and, if so, selectively filtering only pixels in the rightmost column of the current block.
  • this example relates to filtering of samples that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
  • Having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block below the current block.
  • To reduce the number of pixels (samples) to be filtered, in this example only pixels in the bottom row of the current block are filtered, if the block below can use intra prediction.
  • the block is reconstructed, for example by adding the dequantized and inverse transformed coefficients (residual) to the prediction samples. Then the bottom row of the current block is filtered and can then be used for intra prediction of a block below the current block.
  • the method of filtering further comprises the steps of: determining if a block below a current block can use intra prediction, and, if so, selectively filtering only pixels in the bottom row of the current block.
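The complementary strategy, filtering only the boundary samples that a block below would use, can be sketched in the same way (again with a placeholder smoothing kernel standing in for the bilateral filter):

```python
def filter_bottom_row_only(block):
    """Return a copy of block with only the last row smoothed horizontally.
    Only these samples would be used for intra prediction by extrapolation in
    a block below, so filtering is restricted to them."""
    out = [row[:] for row in block]
    bottom = block[-1]
    w = len(bottom)
    for x in range(w):
        left = bottom[max(x - 1, 0)]
        right = bottom[min(x + 1, w - 1)]
        out[-1][x] = (left + bottom[x] + right) // 3
    return out
```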
  • this example relates to filtering blocks that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
  • Having a filter, for example as the bilateral filter directly after the transform for current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block to the right of the current block.
  • all pixels are filtered except the rightmost column of the current block and the bottom row of the current block, if the blocks to the right and below use intra prediction.
  • the block is reconstructed, for example by adding the dequantized and inverse transformed coefficients (residual) to the prediction samples.
  • Since the bilateral filter is applied to all samples except the rightmost column and bottom row, in a case where there is a block to the right or below that uses intra prediction by extrapolation, this can be predicted directly after reconstruction of the current block, if desired, and does not have to wait for the filtering to be performed.
  • the method of filtering further comprises the steps of: determining if blocks to the right of and below a current block can use intra prediction, and, if so, selectively filtering all pixels in the block, except the rightmost column and the bottom row of the current block.
  • the rightmost column and the bottom row are never filtered if the block is the top-left block of a quadrant, since the next block in coding order is to the right and directly after that the block below, as will be described further in examples 7 and 8 below.
  • the rightmost column and the bottom row are never filtered, as will be described further in examples 7 and 8 below.
  • This example relates to filtering blocks that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
  • Having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block to the right of the current block.
  • the next block is predicted from unfiltered pixels. This means that the next block can be predicted before the filtering of the reconstructed samples in the block has finished, breaking the latency problem.
  • This example relates to filtering a current block where the pixels of the current block can be used later for prediction of a subsequent block.
  • the pixels of the current block can be used for prediction of an immediately subsequent block, i.e., the next block in the decoding (or coding) order. If all pixels in the current block are filtered, this filtering would need to finish before the pixels can be used for prediction in the immediately subsequent block.
  • a decoder would therefore need to wait to decode the immediately subsequent block until the filtering of the current block is finished. This wait may mean that the decoder can run out of cycles, i.e., it will not have time to decode the entire frame before it must be displayed. This problem is most acute for small blocks, such as 4x4 blocks, since these take more cycles to decode per pixel.
  • the last column and the last row of the current block are never filtered, as shown in Figure 13. Since an immediately subsequent block can only use the last row or the last column of pixels from the current block for prediction, and since these pixels remain unfiltered, it is possible for the decoding of the subsequent block to commence before filtering of the current block has ended. This means that decoding of the immediately subsequent block can happen in parallel with the filtering of the current block, or at least in parallel with some of the filtering of the current block, whereby such decoding in parallel saves cycles and reduces latency.
  • a step of selectively filtering comprises never filtering a last column and a last row of the current block.
  • the decoding of an immediately subsequent block commences before filtering of a current block has ended.
  • decoding of an immediately subsequent block occurs in parallel with at least some of the filtering of the current block.
  • filtering is avoided only for 4x4 blocks.
  • the last column and the last row of the current block are not filtered, as shown in Figure 13, if the block is the smallest possible block, such as a 4x4 block. Since an immediately subsequent block can only use the last row or the last column of pixels from the current block for prediction, and since these pixels remain unfiltered, it is possible for the decoding of the subsequent block to commence before filtering of the current block has ended, if the current block is a 4x4 block. This means that decoding of the immediately subsequent block can happen in parallel with the filtering of the current 4x4 block, saving cycles and reducing latency. Since the clock cycle budget is especially tight for 4x4 blocks, saving cycles for these blocks is sufficient. Also, this will allow more pixels to be filtered compared to example 7, since all pixels of larger blocks, such as 4x8 blocks, may be filtered.
  • a group of block sizes, such as 4x4, 4x8 and 8x4, is excluded from having their last row and last column filtered.
  • the step of not filtering a last column and a last row of the current block is applied to a group of block sizes, including block sizes comprising 4x4, 4x8 and 8x4.
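A sketch of this size gate, with the size group taken from the example above. For sizes in the group, the last row and last column are excluded from the filtered region; larger blocks are filtered in full:

```python
# Size group from the example above; larger blocks do not need the exclusion
# because their per-pixel cycle budget is less tight.
SMALL_SIZES = {(4, 4), (4, 8), (8, 4)}   # (width, height) pairs

def filtered_region(width, height):
    """Return the (columns, rows) of the block that actually get filtered:
    the full block for large sizes, the block minus its last column and last
    row for the small-size group."""
    if (width, height) in SMALL_SIZES:
        return width - 1, height - 1
    return width, height
```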
  • the method first attempts to determine where the immediately subsequent block will occur. If it occurs to the right, the method avoids filtering the rightmost column of the current block, as shown in Figure 9.
  • If the next immediately subsequent block occurs below, the process avoids filtering the bottommost row of the current block, as shown in Figure 10. If it cannot be determined from the available information where the next immediately subsequent block will occur, the method avoids filtering both the rightmost column and the bottommost row of pixels in the current block.
  • the decoding of the immediately subsequent block is not delayed thus reducing latency and saving cycles.
  • part of the decoding of the current block, namely the filtering, can happen in parallel with decoding the immediately subsequent block.
  • the method keeps track of the Z-order number.
  • blocks are arranged in Z-order according to the table below.
  • If the current block is of type 0 (or 2), the rightmost column of pixels is not filtered, since the immediately subsequent block will be a block of type 1 (or 3), which will be to the right of the current block.
  • If the current block is of type 1, the bottommost row of pixels is not filtered, since the immediately subsequent block will be a block of type 2, which will be below the current block.
  • If the current block is of type 3, both the rightmost column of pixels and the bottommost row of the current block are not filtered, since it is not known if the immediately subsequent block will be to the right or below.
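The single-level Z-order rule above can be sketched as a small lookup. The indices follow the usual Z (Morton) visiting order of a quadtree split: 0 top-left, 1 top-right, 2 bottom-left, 3 bottom-right:

```python
def edges_to_skip(z_index):
    """Return (skip_right_column, skip_bottom_row) for a block with the given
    Z-order index within its quadtree split."""
    if z_index in (0, 2):      # next block (type 1 or 3) is to the right
        return True, False
    if z_index == 1:           # next block (type 2) is below
        return False, True
    return True, True          # index 3: next block's position is unknown
```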
  • the method keeps track of the Z-order number at two or more levels.
  • the first number is a higher-level Z-order number
  • the second number is a lower-level Z-order number.
  • the method would have to avoid filtering both the last row and the last column, since the example of embodiment 1 was only looking at its lower-level Z-order number (i.e. 3).
  • the method can know that the immediately subsequent blocks will be to the right. Hence the method can avoid the filtering only on the rightmost column.
  • the method avoids filtering the last column if the higher-level Z-order number is 0 or 2
  • the method avoids filtering the last row if the higher-level Z-order number is 1.
  • if the higher-level Z-order number is 3, the method avoids filtering both the last row and the last column as before, since the method again does not know where the immediately subsequent block will end up. It is possible to look at even higher levels of Z-order numbers in an analogous fashion.
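Extending the sketch to two levels, the parent's Z-order index resolves the ambiguous case where the lower-level index is 3 (the current block is the last child of its split):

```python
def edges_to_skip_two_level(high_z, low_z):
    """Return (skip_right_column, skip_bottom_row) from a higher-level and a
    lower-level Z-order index."""
    if low_z in (0, 2):
        return True, False
    if low_z == 1:
        return False, True
    # low_z == 3: current block is the last child, consult the parent level
    if high_z in (0, 2):
        return True, False     # parent's next sibling is to the right
    if high_z == 1:
        return False, True     # parent's next sibling is below
    return True, True          # 3 at both levels: location still unknown
```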
  • the Z-order is used to determine whether or not to avoid filtering certain pixels, but where filtering is only avoided if the block is of size 4x4.
  • the filtering is only avoided if the block size is from a group of block sizes, for example 4x4, 4x8 or 8x4.
  • two or more different Z-order levels are used to determine whether or not to avoid filtering certain pixels, but where filtering is only avoided if the block is of size 4x4.
  • the filtering is only avoided if the block size is from a group of block sizes, for example 4x4, 4x8 or 8x4.
  • a filter as described in the embodiments herein may be implemented in a video encoder and a video decoder. It may be implemented in hardware, in software or a combination of hardware and software. The filter may be implemented in, e.g. comprised in, user equipment, such as a mobile telephone, tablet, desktop, netbook, multimedia player, video streaming server, set-top box or computer.
  • Figure 8 shows a method according to a first embodiment, performed by a filter, for filtering of a picture of a video signal.
  • the picture comprises pixels, each pixel being associated with a pixel value, wherein a pixel value is modified by a weighted combination of the pixel value and at least one spatially neighboring pixel value.
  • the method comprises performing a filtering operation on a block by block basis, each block comprising rows and columns of pixels, for example M rows and N columns (step 801).
  • the filtering operation comprises determining where an immediately subsequent block to the current block will occur (step 803), and avoiding the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur (step 805).
  • If it is determined that the immediately subsequent block occurs to the right of the current block, the method comprises avoiding the filtering of the rightmost column of the current block.
  • If it is determined that the immediately subsequent block occurs below the current block, the method comprises avoiding the filtering of the bottommost row of the current block. If it is not possible to determine where the immediately subsequent block occurs, according to one embodiment the method comprises avoiding the filtering of the rightmost column of the current block and the bottommost row of the current block.
  • determining where an immediately subsequent block occurs comprises monitoring a Z-order number of the current block, wherein the Z-order relates to a sequence in which blocks are decoded or coded, and determining where an immediately subsequent block occurs based on the next number in the Z-order sequence, compared to the Z-order number of the current block. According to other examples, determining where an immediately subsequent block occurs comprises the use of Z-order numbers on at least first and second levels.
  • the method may comprise monitoring a first level Z-order number of the current block, wherein the first level Z-order relates to a sequence in which blocks are decoded or coded, and monitoring a second level Z-order number of the current block, wherein the second level Z-order relates to a sequence in which groups of blocks at a first level Z-order are decoded or coded.
  • the method comprises determining where an immediately subsequent block occurs based on the next number in the first level Z-order number, compared to the first level Z- order number of the current block, and, if the location of the immediately subsequent block cannot be determined based on the first level Z-order numbers, using the second level Z-order numbers to determine where the immediately subsequent block will occur.
  • the step of avoiding filtering is only applied to block sizes comprising 4x4.
  • the step of avoiding filtering is applied to a group of block sizes, including block sizes comprising 4x4, 4x8 and 8x4.
  • the decoding of an immediately subsequent block may commence before filtering of a current block has ended.
  • the decoding of an immediately subsequent block can occur in parallel with at least some of the filtering of the current block.
  • the Z-order is a certain order in which blocks are coded or decoded.
  • a large block, typically referred to as a CTU (128x128 in JEM, but could also be 256x256), is then split into a number of smaller blocks, e.g. four blocks, or can be split into half blocks.
  • a 64x64 block can be split into four blocks of size 32x32, or two blocks of 32x64, or two blocks of size 64x32.
  • a block to the right or below the current block uses samples from the current block for prediction by extrapolation, for example intra prediction
  • the block to the bottom left of the current block uses samples from the current block.
  • that block can be the block next in processing order.
  • Since a block to the right can always be a prediction block, e.g. an intra block, to minimize impact on latency it is, according to some embodiments, possible to always omit filtering of the right column when the next block in the processing order is to the right of the current block.
  • Since a block below, or below to the left, can always be a prediction block, e.g. an intra block, in some embodiments it is possible to always omit filtering of the bottom row to minimize impact on latency.
  • a common way of splitting blocks is what is called a quadtree split, where a block of size WxH is split into four blocks of size (W/2)x(H/2).
  • Another common way of splitting block is called a binary split, where a block of size WxH is split into two smaller blocks, either of size (W/2)xH or Wx(H/2).
  • A common way to combine quadtree split and binary split is to first split a block using quadtree split, and then split it further using binary split. In this way of operation, a quadtree split never follows a binary split. Such splitting is often referred to as "quadtree followed by binary tree", QTBT.
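The two split operations can be expressed as simple size computations (a quadtree split halves both dimensions; a binary split halves one), assuming even block dimensions:

```python
def quadtree_split(w, h):
    """Split a WxH block into four (W/2)x(H/2) blocks."""
    return [(w // 2, h // 2)] * 4

def binary_split(w, h, vertical):
    """Split a WxH block into two smaller blocks: two (W/2)xH blocks for a
    vertical split, or two Wx(H/2) blocks for a horizontal split."""
    return [(w // 2, h)] * 2 if vertical else [(w, h // 2)] * 2
```

For instance, a 64x64 block yields four 32x32 blocks under a quadtree split, or two 32x64 (vertical) or two 64x32 (horizontal) blocks under a binary split, matching the example above.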
  • Figure 14a shows an example of a starting point, e.g. a 64x64 block, A. Then this block is (quadtree) split into four 32x32 blocks B, C, D, E as shown in Figure 14b.
  • Block C is split into two 16x32 blocks C1 and C2
  • D2 is split into two 16x8 blocks D21 and D22
  • D4 is split into two 8x16 blocks D41 and D42.
  • the blocks C1 and C2 are further split using binary splitting, as shown in Figure 14d; C1 is split into two 8x32 blocks C11 and C12, and C2 is split into two 8x32 blocks C21 and C22.
  • the method comprises looking at the last split. If the last split was a quadtree split, it is possible to use the Z-order number as described in a previous embodiment. However, if the last split was a binary split, the method instead takes into consideration whether the block in question is the first or the last block of the binary split in decoding order.
  • D2 is split horizontally into D21 and D22, where D21 is the first block of the split in decoding order and D22 is the last block of the split in decoding order.
  • D21 is immediately followed by D22 in decoding order.
  • the method may consider avoiding filtering the lowest (last) row of D21 since it is adjacent to the immediately subsequent block in the decoding order, block D22.
  • the block is split vertically, the method may instead consider avoiding filtering the last column of the first block.
  • An example is D4, which is split vertically into D41 and D42.
  • D41 is the first block of the split and D42 is the last block of the split.
  • the method may consider avoiding filtering the rightmost (last) column of D41, since it is adjacent to the immediately subsequent block in the decoding order, block D42.
  • the method can avoid filtering the last column for C11 and C21, since they are the first blocks of their respective most recent (vertical) splits.
  • the method may avoid filtering both the last row and the last column, e.g. to be safe.
  • the method may also go one step up in the hierarchy to see if the block was first or last.
  • block C12 was the last block of the split C1 → C11/C12, so one cannot tell from the most recent split what to do.
  • C1 was the first block of the vertical split C → C1/C2. Hence it is safe to determine that it is sufficient to avoid filtering the last column of C12.
  • For C22, it was the last block both in the split C2 → C21/C22, and furthermore C2 was the last block in the split C → C1/C2. Hence one cannot draw any conclusions from the two most recent splits. In this case, the method may need to go as far back as the quadtree split.
  • the method can determine that the Z-order number was 1, and hence the method should avoid filtering the last row of C22.
  • the step in Figure 8 of determining where an immediately subsequent block occurs may comprise determining how a block has been partitioned into smaller blocks using quadtree and/or binary splits for coding or decoding.
  • the method may comprise determining the type of last split and, if the last split was a quadtree split, using a Z-order number as described earlier for determining where an immediately subsequent block will occur.
  • the method may comprise determining the type of last split and, if the last split was a binary split avoiding the filtering of the last column of a current block, where the current block is a first block of a most recent vertical split, or avoiding the filtering of the last row of a current block, where the current block is a first block of a most recent horizontal split.
  • the method may further comprise, avoiding the filtering of the last column and last row of a current block, where the current block is a last block of a most recent vertical or horizontal split, or checking one or more higher levels in a block splitting hierarchy to determine whether filtering should be avoided in the last column only or last row only of a current block.
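The walk through the split history described above might be sketched as follows. The list-based encoding of the history (outermost split first, binary entries marked vertical or horizontal with a first-block flag, quadtree entries carrying a Z-order index) is an assumption made only for this illustration:

```python
def edges_to_skip_from_history(history):
    """Return (skip_right_column, skip_bottom_row), walking from the most
    recent split outwards until the next block's position is determined.
    history entries: ('quad', z_index), ('bin_v', is_first), ('bin_h', is_first)."""
    for kind, info in reversed(history):
        if kind == 'quad':
            if info in (0, 2):
                return True, False     # next block is to the right
            if info == 1:
                return False, True     # next block is below
            continue                   # z == 3: look one level up
        if kind == 'bin_v':
            if info:                   # first block of a vertical split
                return True, False
            continue                   # last block: look one level up
        if kind == 'bin_h':
            if info:                   # first block of a horizontal split
                return False, True
            continue
    return True, True                  # ran out of history: skip both, to be safe
```

Applied to the figure's example, C12's history is an outer quadtree split (Z index 1), then C → C1/C2 (vertical, first) and C1 → C11/C12 (vertical, last): the walk stops at the first-of-vertical-split level and only the last column is skipped, as the text concludes. For C22 both binary splits are "last", so the walk falls back to the quadtree Z index 1 and skips the last row.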
  • the method comprises reconstructing the block by adding the dequantized and inverse transformed residual coefficients to the prediction samples.
  • the current block can be predicted, e.g. by intra prediction using reconstructed samples from the current picture or by inter prediction using samples from another picture that have already been reconstructed.
  • the error from that prediction compared to the source is typically compressed by a transform.
  • the transform coefficients are quantized to reduce overhead. All coding parameters (prediction parameters, quantized transform coefficients) may be entropy coded to further reduce overhead.
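A minimal numeric sketch of this reconstruction path, with the transform omitted (treated as identity) and an arbitrary flat quantization step; both simplifications are assumptions made to keep the example short:

```python
QSTEP = 4  # hypothetical flat quantization step

def quantize(residual):
    """Encoder side: quantize residual samples to integer levels."""
    return [r // QSTEP for r in residual]

def dequantize(levels):
    """Decoder side: scale the levels back up (with quantization loss)."""
    return [l * QSTEP for l in levels]

def reconstruct(prediction, levels):
    """Add the dequantized residual back onto the prediction samples."""
    return [p + r for p, r in zip(prediction, dequantize(levels))]
```

The reconstructed samples differ from the source by the quantization error, which is exactly the kind of artifact the loop filter described in this document is meant to suppress.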
  • FIG. 15 shows an example of a filter 1400 according to an embodiment, whereby the filter is implemented as a data processing system.
  • the data processing system includes at least one processor 1401 that is coupled to a network interface 1405 via an interconnect.
  • the at least one processor 1401 is also coupled to a memory 1403 via the interconnect.
  • the memory 1403 can be implemented by a hard disk drive, flash memory, or read-only memory and stores computer-readable instructions.
  • the at least one processor 1401 executes the computer-readable instructions and implements the functionality described in the embodiments above.
  • the network interface 1405 enables the data processing system 1400 to communicate with other nodes in a network.
  • Alternative examples may include additional components responsible for providing additional functionality, including any functionality described above and/or any functionality necessary to support the solution described herein.
  • the filter 1400 may be operative to filter a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value, the filter being configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value.
  • the filter 1400 may be operative to filter on a block by block basis, each block comprising rows and columns of pixels.
  • the filter may be operative to determine where an immediately subsequent block to the current block will occur, and to avoid the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur.
  • the filter may be further operative such that: if it is determined that the immediately subsequent block occurs to the right of the current block, configuring the filter to avoid the filtering of the right most column of the current block; or if it is determined that the immediately subsequent block occurs below the current block, configuring the filter to avoid the filtering of the bottom most row of the current block; or if it is not possible to determine where the immediately subsequent block occurs, configuring the filter to avoid the filtering of the right most column of the current block and the bottom most row of the current block.
  • the filter may be further operative to determine where an immediately subsequent block occurs by: monitoring a Z-order number of the current block, wherein the Z-order relates to a sequence in which blocks are decoded or coded; and determining where an immediately subsequent block occurs based on the next number in the Z-order sequence, compared to the Z-order number of the current block.
  • the filter may be further operative to determine where an immediately subsequent block occurs by using Z-order numbers on at least first and second levels, by: monitoring a first level Z-order number of the current block, wherein the first level Z-order relates to a sequence in which blocks are decoded or coded; monitoring a second level Z-order number of the current block, wherein the second level Z-order relates to a sequence in which groups of blocks at a first level Z-order are decoded or coded.
  • the filter may be operative to determine where an immediately subsequent block occurs based on the next number in the first level Z-order number, compared to the first level Z-order number of the current block, and, if the location of the immediately subsequent block cannot be determined based on the first level Z-order numbers, using the second level Z-order numbers to determine where the immediately subsequent block will occur.
  • the filter 1400 may be further operative to perform filtering operations as described here, and defined in the appended claims.
  • FIG 16 shows an example of part of a video coding system 1500 having a filter 1400 according to an embodiment.
  • the filter 1400 may comprise a filter as described in any of the embodiments herein.
  • the filter 1400 is shown as being positioned between a transform module 1501 for a current block and a prediction module 1503, the prediction module 1503 being configured to provide a prediction, for example an intra prediction operation for a block to the right of or below a current block.
  • Figure 17 shows an example of a decoder 1600 that comprises a modifying means, for example a filter as described herein, configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value.
  • the modifying means may be operative to filter on a block by block basis, each block comprising rows and columns of pixels, e.g. M rows and N columns, and determine where an immediately subsequent block to the current block will occur, and avoid the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur.
  • the modifying means may be further operative to perform a filtering method as described herein, and as defined in the appended claims.
  • At least one of the parameters σd and σr may also depend on at least one of: quantization parameter, quantization scaling matrix, transform width, transform height, picture width, picture height, or a magnitude of a negative filter coefficient used as part of inter/intra prediction.
  • the embodiments described herein provide an improved filter for video coding and decoding.
  • Figure 18 is a schematic block diagram of a video encoder 40 according to an embodiment.
  • a current sample block also referred to as pixel block or block of pixels, is predicted by performing a motion estimation by a motion estimator 50 from already encoded and reconstructed sample block(s) in the same picture and/or in reference picture(s).
  • the result of the motion estimation is a motion vector in the case of inter prediction.
  • the motion vector is utilized by a motion compensator 50 for outputting an inter prediction of the sample block.
  • An intra predictor 49 computes an intra prediction of the current sample block.
  • the outputs from the motion estimator/compensator 50 and the intra predictor 49 are input in a selector 51 that either selects intra prediction or inter prediction for the current sample block.
  • the output from the selector 51 is input to an error calculator in the form of an adder 41 that also receives the sample values of the current sample block.
  • the adder 41 calculates and outputs a residual error as the difference in sample values between the sample block and its prediction, i.e., prediction block.
  • the error is transformed in a transformer 42, such as by a discrete cosine transform (DCT), and the resulting coefficients are quantized by a quantizer 43, followed by coding in an encoder 44, such as by an entropy encoder.
  • the estimated motion vector is brought to the encoder 44 for generating the coded representation of the current sample block.
  • the transformed and quantized residual error for the current sample block is also provided to an inverse quantizer 45 and inverse transformer 46 to reconstruct the residual error.
  • This residual error is added by an adder 47 to the prediction output from the motion compensator 50 or the intra predictor 49 to create a reconstructed sample block that can be used as prediction block in the prediction and coding of other sample blocks.
  • This reconstructed sample block is first processed by a device 100 for filtering of a picture according to the embodiments in order to suppress ringing artifacts.
  • the modified, i.e., filtered, reconstructed sample block is then temporarily stored in a Decoded Picture Buffer (DPB) 48, where it is available to the intra predictor 49 and the motion estimator/compensator 50.
  • the modified, i.e. filtered, reconstructed sample block from device 100 is also coupled directly to the intra predictor 49.
  • if the deringing filtering is instead applied following the inverse transform, the device 100 is preferably arranged between the inverse transformer 46 and the adder 47.
  • An embodiment relates to a video decoder comprising a device for filtering of a picture according to the embodiments.
  • FIG 19 is a schematic block diagram of a video decoder 60 comprising a device 100 for filtering of a picture according to the embodiments.
  • the video decoder 60 comprises a decoder 61, such as an entropy decoder, for decoding a bitstream comprising an encoded representation of a sample block to get a set of quantized and transformed coefficients. These coefficients are dequantized in an inverse quantizer 62 and inverse transformed by an inverse transformer 63 to get a decoded residual error.
  • the decoded residual error is added in an adder 64 to the sample prediction values of a prediction block.
  • the prediction block is determined by a motion estimator/compensator 67 or an intra predictor 66, depending on whether inter or intra prediction is performed.
  • a selector 68 is thereby interconnected to the adder 64 and the motion estimator/compensator 67 and the intra predictor 66.
  • the resulting decoded sample block output from the adder 64 is input to a device 100 for filtering of a picture or part of a picture in order to suppress and combat any ringing artifacts.
  • the filtered sample block enters a DPB 65 and can be used as prediction block for subsequently decoded sample blocks.
  • the DPB 65 is thereby connected to the motion estimator/compensator 67 to make the stored sample blocks available to the motion estimator/compensator 67.
  • the output from the adder 64 is preferably also input to the intra predictor 66 to be used as an unfiltered prediction block.
  • the filtered sample block is furthermore output from the video decoder 60, such as output for display on a screen. If the deringing filtering instead is applied following inverse transform, the device 100 is preferably instead arranged between the inverse transformer 63 and the adder 64.
  • One idea of embodiments of the present invention is to introduce a deringing filter into the Future Video Codec, i.e., the successor to HEVC.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method, performed by a filter, for filtering of a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value, wherein a pixel value is modified by a weighted combination of the pixel value and at least one spatially neighboring pixel value, comprises performing a filtering operation on a block by block basis, each block comprising rows and columns of pixels. The filtering operation comprises determining where an immediately subsequent block to the current block will occur, and avoiding the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur.

Description

FILTER APPARATUS AND METHODS
Technical Field
The present embodiments generally relate to filter apparatus and methods, for example to filter apparatus and methods for video coding and decoding, and in particular to deringing filtering in video coding and decoding.
Background
The latest video coding standard, H.265, also known as High Efficiency Video Coding (HEVC), is a block-based video codec, developed by the Joint Collaborative Team on Video Coding (JCT-VC). It utilizes both temporal and spatial prediction. Spatial prediction is achieved using intra (I) prediction from within the current picture. A picture consisting of only intra coded blocks is referred to as an I-picture. Temporal prediction is achieved using inter (P) or bi-directional inter (B) prediction on block level. HEVC was finalized in 2013.
International Telecommunication Union (ITU) Telecommunication Standardization Sector (ITU-T) Video Coding Experts Group (VCEG) and International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) Moving Picture Experts Group (MPEG) are studying the potential need for standardization of future video coding technology with a compression capability that significantly exceeds that of the current HEVC standard. Such future standardization action could either take the form of additional extension(s) of HEVC or an entirely new standard. The groups are working together on this exploration activity in a joint collaboration effort known as the Joint Video Exploration Team (JVET) to evaluate compression technology designs proposed by their experts in this area.
Ringing, also referred to as Gibbs phenomenon, appears in video frames as oscillations near sharp edges. It is a result of a cut-off of high-frequency information in the block Discrete Cosine Transform (DCT) transformation and lossy quantization process. Ringing also comes from inter prediction where sub-pixel interpolation using a filter with negative weights can cause ringing near sharp edges. Artificial patterns that resemble ringing can also appear from intra prediction, as shown in the right part of Figure 1 (whereby Figures 1 (A) and (B) illustrate the ringing effect on a zoomed original video frame and a zoomed compressed video frame respectively). The ringing effect degrades the objective and subjective quality of video frames.
As a non-iterative and straightforward filtering technique, bilateral filtering is widely used in image processing because of its edge-preserving and noise-reducing features. Unlike the conventional linear filters of which the coefficients are predetermined, a bilateral filter decides its coefficients based on the contrast of the pixels in addition to the geometric distance. A Gaussian function has usually been used to relate coefficients to the geometric distance and contrast of the pixel values.
For a pixel located at (i, j), which will be filtered using its neighboring pixel (k, l), the weight ω(i, j, k, l) assigned for pixel (k, l) to filter the pixel (i, j) is defined as:
ω(i, j, k, l) = exp( −((i − k)² + (j − l)²) / (2σd²) − (I(i, j) − I(k, l))² / (2σr²) )
σd is the spatial parameter, and σr is the range parameter. The bilateral filter is controlled by these two parameters. I(i, j) and I(k, l) are the original intensity levels of pixels (i, j) and (k, l) respectively.
After the weights are obtained, they are normalized, and the final pixel value ID(i, j) is given by:
ID(i, j) = Σ(k,l) I(k, l) · ω(i, j, k, l) / Σ(k,l) ω(i, j, k, l)
ID is the filtered intensity of pixel (i, j).
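The two formulas above can be sketched as follows; the function and variable names are illustrative, and the image is assumed to be a list of rows of intensity values:

```python
import math

def bilateral_weight(i, j, k, l, img, sigma_d, sigma_r):
    # Weight of neighbouring pixel (k, l) when filtering pixel (i, j),
    # following the weight definition above: a spatial term and a range term.
    spatial = ((i - k) ** 2 + (j - l) ** 2) / (2.0 * sigma_d ** 2)
    intensity = (img[i][j] - img[k][l]) ** 2 / (2.0 * sigma_r ** 2)
    return math.exp(-(spatial + intensity))

def bilateral_filter_pixel(i, j, neighbours, img, sigma_d, sigma_r):
    # Normalised weighted sum over the pixel itself and its neighbours,
    # i.e. the filtered intensity ID(i, j).
    num = den = 0.0
    for (k, l) in neighbours:
        w = bilateral_weight(i, j, k, l, img, sigma_d, sigma_r)
        num += img[k][l] * w
        den += w
    return num / den
```

Note that in a flat region the filter leaves the pixel value unchanged, since every neighbour contributes the same intensity.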
Rate-Distortion Optimization (RDO) is part of the video encoding process. It improves coding efficiency by finding the "best" coding parameters. It measures both the number of bits used for each possible decision outcome of the block and the resulting distortion of the block. A deblocking filter (DBF) and a Sample Adaptive Offset (SAO) filter are included in the HEVC standard. In addition to these, an Adaptive Loop Filter (ALF) is added into the later version of the Future Video Codec. Among those filters, SAO will remove some of the ringing artifacts, but there is still room for improvement.
Another problem with deploying bilateral filtering in video coding is that it is too complex and lacks sufficient parameter settings and adaptivity.
The embodiments disclosed herein relate to further improvements to a filter.
Summary
It is an aim of the present invention to provide a method and apparatus which obviate or reduce at least one or more of the disadvantages mentioned above.

According to a first aspect of the present invention there is provided a method, performed by a filter, for filtering of a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value, wherein a pixel value is modified by a weighted combination of the pixel value and at least one spatially neighboring pixel value. The method comprises performing a filtering operation on a block by block basis, each block comprising rows and columns of pixels. The filtering operation comprises determining where an immediately subsequent block to the current block will occur; and avoiding the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur.

According to another aspect of the present invention there is provided a filter, for filtering of a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value, the filter being configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value. The filter is configured to filter on a block by block basis, each block comprising rows and columns of pixels. The filter is configured to determine where an immediately subsequent block to the current block will occur, and avoid the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur.

According to another aspect there is provided a decoder comprising a modifying means configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value. The modifying means is operative to filter on a block by block basis, each block comprising rows and columns of pixels.
The modifying means is operative to determine where an immediately subsequent block to the current block will occur, and avoid the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur.
According to another aspect, there is provided a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method as described in the embodiments herein, and defined in the appended claims.
According to another aspect, there is provided a computer program product comprising a computer-readable medium with the computer program as above.
Brief description of the drawings
For a better understanding of examples of the present invention, and to show more clearly how the examples may be carried into effect, reference will now be made, by way of example only, to the following drawings in which:
Figures 1 (A) and (B) illustrate the ringing effect on a zoomed original video frame and a zoomed compressed video frame respectively; Figure 2 illustrates an 8x8 transform unit block and the filter aperture for the pixel located at (1,1);
Figure 3 illustrates a plus sign shaped deringing filter aperture; Figure 4 illustrates a rectangular shaped deringing filter aperture of size M x N=3x3 pixels;
Figure 5 illustrates the steps performed in a filtering method according to an example; Figure 6 illustrates a decoder according to an example; Figure 7 illustrates a data processing system in accordance with an example; Figure 8 shows an example of a method according to an embodiment;
Figure 9 shows an example of block filtering according to an embodiment; Figure 10 shows an example of block filtering according to an embodiment; Figure 11 shows an example of block filtering according to an embodiment; Figure 12 shows an example of block filtering according to an embodiment; Figure 13 shows an example of block filtering according to an embodiment;
Figure 14 shows an example of how a block may be partitioned into smaller blocks for coding and/or decoding;
Figure 15 shows an example of a filter according to an embodiment;
Figure 16 shows an example of a video coding system having a filter according to an embodiment;
Figure 17 shows an example of a decoder according to an embodiment;
Figure 18 illustrates schematically a video encoder according to an embodiment; and Figure 19 illustrates schematically a video decoder according to an embodiment.

Detailed description
The following sets forth specific details, such as particular embodiments for purposes of explanation and not limitation. But it will be appreciated by one skilled in the art that other embodiments may be employed apart from these specific details. In some instances, detailed descriptions of well-known methods, nodes, interfaces, circuits, and devices are omitted so as not to obscure the description with unnecessary detail. Those skilled in the art will appreciate that the functions described may be implemented in one or more nodes using hardware circuitry (e.g., analog and/or discrete logic gates interconnected to perform a specialized function, ASICs, PLAs, etc.) and/or using software programs and data in conjunction with one or more digital microprocessors or general purpose computers. Nodes that communicate using the air interface also have suitable radio communications circuitry. Moreover, where appropriate the technology can additionally be considered to be embodied entirely within any form of computer-readable memory, such as solid-state memory, magnetic disk, or optical disk containing an appropriate set of computer instructions that would cause a processor to carry out the techniques described herein.
Hardware implementation may include or encompass, without limitation, digital signal processor (DSP) hardware, a reduced instruction set processor, hardware (e.g., digital or analog) circuitry including but not limited to application specific integrated circuit(s) (ASIC) and/or field programmable gate array(s) (FPGA(s)), and (where appropriate) state machines capable of performing such functions.
In terms of computer implementation, a computer is generally understood to comprise one or more processors, one or more processing units, one or more processing modules or one or more controllers, and the terms computer, processor, processing unit, processing module and controller may be employed interchangeably. When provided by a computer, processor, processing unit, processing module or controller, the functions may be provided by a single dedicated computer, processor, processing unit, processing module or controller, by a single shared computer, processor, processing unit, processing module or controller, or by a plurality of individual computers, processors, processing units, processing modules or controllers, some of which may be shared or distributed. Moreover, these terms also refer to other hardware capable of performing such functions and/or executing software, such as the example hardware recited above. The filters described herein may be used in any form of user equipment, such as a mobile telephone, tablet, desktop, netbook, multimedia player, video streaming server, set-top box or computer. Although in the description below the term user equipment (UE) is used, it should be understood by those skilled in the art that "UE" is a non-limiting term comprising any mobile device, communication device, wireless communication device, terminal device or node equipped with a radio interface allowing for at least one of: transmitting signals in uplink (UL) and receiving and/or measuring signals in downlink (DL). A UE herein may comprise a UE (in its general sense) capable of operating or at least performing measurements in one or more frequencies, carrier frequencies, component carriers or frequency bands. It may be a "UE" operating in single- or multi-radio access technology (RAT) or multi-standard mode.
As well as "UE", the general terms "terminal device", "communication device" and "wireless communication device" are used in the following description, and it will be appreciated that such a device may or may not be 'mobile' in the sense that it is carried by a user. Instead, the term "terminal device" (and the alternative general terms set out above) encompasses any device that is capable of communicating with communication networks that operate according to one or more mobile communication standards, such as the Global System for Mobile communications, GSM, UMTS, Long-Term Evolution, LTE, etc. A UE may comprise a Universal Subscription Identity Module (USIM) on a smart-card or implemented directly in the UE, e.g., as software or as an integrated circuit. The operations described herein may be partly or fully implemented in the USIM or outside of the USIM.
An earlier co-pending patent application by the present Applicant, PCT/SE2017/050776, filed on 11 July 2017, describes a dedicated deringing filter in HEVC, which introduces a deringing filter into the Future Video Codec (the successor to HEVC). The deringing filter proposed in the earlier application is evolved from a bilateral filter, and proposes some simplifications, and how to adapt the filtering to local parameters in order to improve the filtering performance.
Embodiments described here are related to providing filtering blocks that can be used in filters, including for example a deringing filter as described in the earlier application, for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
Having a filter, such as the bilateral filter, directly after a transform for a current block adds one additional filtering step before one can perform intra prediction of a block to the right of and/or below the current block. Embodiments herein disclose how the pixels within a block can be selectively filtered, such that in certain circumstances a next block can be predicted from unfiltered pixels. This means, for example, that a next block can be predicted before the filtering of the reconstructed samples in the current block has finished, thereby reducing latency.

Prior to describing the embodiments of the present invention, reference will first be made to the examples of the earlier co-pending application PCT/SE2017/050776, filed on 11 July 2017, to provide background and context for the present invention. The embodiments of the earlier application will be referred to below as examples.
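The selective filtering described above can be sketched as follows. The assumption that a subsequent block to the right would predict from the current block's last column, and a block below from its last row, is an illustration of the idea only, not the exact rule of any codec; `filter_pixel` stands in for any per-pixel filter such as the bilateral filter:

```python
def filter_block_selectively(block, next_block_right, next_block_below, filter_pixel):
    # Filter a block, but leave unfiltered those pixels that the
    # immediately subsequent block would use for intra prediction,
    # so the next block can be predicted before filtering finishes.
    rows, cols = len(block), len(block[0])
    out = [row[:] for row in block]
    for r in range(rows):
        for c in range(cols):
            if next_block_right and c == cols - 1:
                continue  # keep the rightmost column unfiltered
            if next_block_below and r == rows - 1:
                continue  # keep the bottom row unfiltered
            out[r][c] = filter_pixel(block, r, c)
    return out
```

When no subsequent block borders the current block, the whole block is filtered as usual.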
The examples of the earlier application provide advantages in that the proposed filtering removes ringing artifacts in compressed video frames, so a better video quality (both objectively and subjectively) can be achieved with a small increase in codec complexity. Objectively, coding efficiency as calculated by Bjøntegaard-Delta bit rate (BD-rate) is improved by between 0.5 and 0.7%.
Example 1
According to a first example, a bilateral deringing filter with a plus sign shaped filter aperture is used directly after inverse transform. An identical filter and identical filtering process is used in the corresponding video encoder and decoder to ensure that there is no drift between the encoder and the decoder.
The first example describes a way to remove ringing artifacts by using a deringing filter designed in the earlier application. The deringing filter is evolved from a bilateral filter.
By applying the deringing filter, each pixel in the reconstructed picture is replaced by a weighted average of itself and its neighbors. For instance, a pixel located at (i, j) will be filtered using its neighboring pixel (k, l). The weight ω(i, j, k, l) is the weight assigned for pixel (k, l) to filter the pixel (i, j), and it is defined as:
ω(i, j, k, l) = exp( −((i − k)² + (j − l)²) / (2σd²) − (I(i, j) − I(k, l))² / (2σr²) )
I(i, j) and I(k, l) are the original reconstructed intensity values of pixels (i, j) and (k, l) respectively. σd is the spatial parameter, and σr is the range parameter. The bilateral filter is controlled by these two parameters. In this way, the weight of a reference pixel (k, l) for the pixel (i, j) depends both on the distance between the pixels and on the intensity difference between the pixels. Pixels located closer to the pixel to be filtered, and with a smaller intensity difference to the pixel to be filtered, will have a larger weight than other, more distant (in space or intensity) pixels. In this example, σd and σr are constant values.
The deringing filter, in this example, is applied to each TU block after the inverse transform in an encoder, as shown in Figure 2, which shows an example of an 8x8 block. This means, for example, that subsequent intra-coded blocks will predict from the filtered pixel values. The filter may also be used during R-D optimization in the encoder. The identical deringing filter is also applied to each TU block after the inverse transform in the corresponding video decoder.
In this example, each pixel in the transform unit is filtered using its direct neighboring pixels only, as shown in Figure 3. The filter has a plus sign shaped filter aperture centered at the pixel to be filtered.
The output filtered pixel intensity ID(i, j) is:

ID(i, j) = Σ(k,l) I(k, l) · ω(i, j, k, l) / Σ(k,l) ω(i, j, k, l)
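A sketch of the plus-shaped filtering of an entire TU block follows. The handling of block borders, where a plus arm would fall outside the block, is not specified above; excluding such neighbours from the weighted sum is an illustrative assumption:

```python
import math

def plus_filter_block(block, sigma_d, sigma_r):
    # Plus-shaped bilateral deringing filter applied to every pixel of a
    # TU block: the centre pixel plus its four direct neighbours.
    rows, cols = len(block), len(block[0])
    out = [row[:] for row in block]
    offsets = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]  # centre + four arms
    for i in range(rows):
        for j in range(cols):
            num = den = 0.0
            for di, dj in offsets:
                k, l = i + di, j + dj
                if not (0 <= k < rows and 0 <= l < cols):
                    continue  # illustrative: skip neighbours outside the block
                w = math.exp(-(di * di + dj * dj) / (2.0 * sigma_d ** 2)
                             - (block[i][j] - block[k][l]) ** 2 / (2.0 * sigma_r ** 2))
                num += block[k][l] * w
                den += w
            out[i][j] = num / den
    return out
```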
In an efficient implementation of the first example, in a video encoder/decoder, all possible weights (coefficients) of the proposed deringing filter are calculated and stored in a two-dimensional look-up table (LUT). The LUT can, for instance, use the spatial distance and the intensity difference between the pixel to be filtered and the reference pixels as indices. In the case where the filter aperture is a plus sign, there will only be two distances: distance 0 for the middle pixel and distance 1 for the other four pixels. Furthermore, the middle pixel will not have any intensity difference (since the middle pixel is the filtered pixel), and therefore its weight will always be e^0 = 1 when calculated using equation 1. Thus, in the case of the plus-shaped filter of Figure 3, a one-dimensional look-up table (LUT), indexed on the difference in intensity, or on the absolute value of the difference in intensity, is sufficient.
Instead of one LUT, one could have one LUT dedicated to a weight dependent on distance from the current pixel (w_d) and another LUT dedicated to a weight dependent on closeness in pixel value (w_r). It should be noted that the exponential function used to determine the weights could be some other function as well. The LUT could be optimized based on some error metric (SSD, SSIM) or according to human vision. Instead of one LUT, one could also have one LUT for weights vertically above or below the current pixel and another LUT for weights horizontally to the left or right of the current pixel.
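A one-dimensional range LUT as described above could be built as follows; the maximum difference of 255 assumes 8-bit samples and is an illustrative choice:

```python
import math

def build_range_lut(sigma_r, max_diff=255):
    # One-dimensional LUT of range weights, indexed on the absolute
    # intensity difference between the reference pixel and the pixel
    # to be filtered; max_diff=255 assumes 8-bit video.
    return [math.exp(-(d * d) / (2.0 * sigma_r ** 2)) for d in range(max_diff + 1)]
```

The weight of a plus-arm neighbour is then the constant spatial factor exp(−1/(2σd²)) multiplied by `lut[abs(diff)]`, while the centre pixel keeps weight 1, so no exponential needs to be evaluated per pixel.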
Example 2
According to the second example of the earlier application, a deringing filter with a rectangular shaped filter aperture is used in the video encoder's R-D optimization process. The same filter is also used in the corresponding video decoder.
In the second example, each pixel is filtered using its neighboring pixels within an M by N rectangular filter aperture centered at the pixel to be filtered, as shown in Figure 4. The same deringing filter as in the first example is used.
Example 3
The deringing filter according to the third example of the earlier application is used after prediction and transform have been performed for an entire frame or part of a frame. The same filter is also used in the corresponding video decoder.
The third example is the same as the first or second example, except that the filtering is not done right after the inverse transform. Instead, the proposed filter is applied to the reconstructed picture in both the encoder and the decoder. On the one hand this could lead to worse performance, since filtered pixels will not be used for intra prediction, but on the other hand the difference is likely very small, and the existing filters are currently placed at this stage of the encoder/decoder.
Example 4
In this example, σd and/or σr are related to the Transform Unit, TU, size. The σd and σr can be functions of the form (e.g. polynomial functions):

σd = f1(TU block size)
σr = f2(TU block size)
If both σd and σr are derived based on TU size, a preferred example is to have different functions f1 ≠ f2. If the transform unit is non-quadratic, it may be possible to instead use σd = 0.92 − min{TU block width, TU block height} · 0.025. Alternatively, it is possible to use σd = 0.92 − max{TU block width, TU block height} · 0.025, or σd = 0.92 − mean{TU block width, TU block height} · 0.025, where mean{a, b} = (a + b)/2.
When the transform size is different in the vertical and horizontal directions, σd can be separate for the filter coefficients vertically and horizontally, so that σd_ver and σd_hor are functions of the form (e.g. polynomial functions):

σd_ver = 0.92 − (TU block height) · 0.025
σd_hor = 0.92 − (TU block width) · 0.025
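The σd functions quoted above can be collected into one small helper; the min/max/mean variants follow the text, while the choice of "min" as the default mode is an assumption:

```python
def sigma_d_from_tu(width, height, mode="min"):
    # sigma_d = 0.92 - size * 0.025, where "size" is the min, max or mean
    # of the TU block width and height, per the variants in the text.
    if mode == "min":
        size = min(width, height)
    elif mode == "max":
        size = max(width, height)
    else:  # "mean"
        size = (width + height) / 2.0
    return 0.92 - size * 0.025
```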
A further generalization is to have a weight and/or size dependent on distance based on a function of TU size or TU width or TU height, and a weight and/or size dependent on pixel closeness based on a function of TU size or TU width or TU height.

Example 5
In this example, σd and σr are related to the Quantization Parameter, QP, value.
Thus, σd and σr can be functions of the form:

σd = f3(QP)
σr = f4(QP)

A preferred function also takes bit_depth into account, wherein bit_depth corresponds to the video bit depth, i.e. the number of bits used to represent pixels in the video; a particular case is when bit_depth = 10. If both σd and σr are derived based on QP, a preferred example is to have different functions f3 ≠ f4. The QP mentioned here relates to the coarseness of the quantization of transform coefficients. The QP can correspond to a picture or slice QP, or even a locally used QP, i.e. the QP for the TU block.
QP can be defined differently in different standards, so that the QP in one standard does not correspond to the QP in another standard. In HEVC, and so far in JEM, six steps of QP change double the quantization step. This could be different in a final version of H.266, where steps could be finer or coarser and the range could be extended beyond 51. Thus, in a general example, the range parameter is a polynomial model, for example a first-order model, of the QP.
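A first-order polynomial model of the QP, as suggested above, might look as follows; the coefficients `a` and `b` and the positive floor are placeholder values chosen for illustration, not values taken from this document:

```python
def sigma_r_from_qp(qp, a=0.5, b=-8.5, floor=0.01):
    # First-order polynomial model of the QP for the range parameter.
    # a, b and floor are illustrative placeholders; the floor keeps the
    # parameter positive for very small QP values.
    return max(a * qp + b, floor)
```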
Another approach is to define a table with an entry for each QP, where each entry relates to the reconstruction level of at least one transform coefficient quantized with that QP to 1. For instance, a table of σd and/or a table of σr is created where each entry, i.e., QP value, relates to the reconstruction level, i.e., the pixel value after inverse transform and inverse quantization, for one transform coefficient quantized with that QP to 1, e.g., the smallest possible value a quantized transform coefficient can have. This reconstruction level indicates the smallest pixel value change that can originate from a true signal. Changes smaller than half of this value can be regarded as coding noise that the deringing filter should remove.
Yet another approach is to have the weights dependent on quantization scaling matrices; especially relevant are the scaling factors for the higher-frequency transform coefficients, since ringing artefacts are due to quantization of higher-frequency transform coefficients. Currently, HEVC by default uses a uniform reconstruction quantization (URQ) scheme that quantizes all frequencies equally. HEVC has the option of using quantization scaling matrices, also referred to as scaling lists, either default ones, or quantization scaling matrices that are signaled as scaling list data in the sequence parameter set (SPS) or picture parameter set (PPS). To reduce the memory needed for storage, scaling matrices are typically only specified for 4x4 and 8x8 matrices. For the larger transformations of sizes 16x16 and 32x32, the signaled 8x8 matrix is applied by having 2x2 and 4x4 blocks share the same scaling value, except at the DC positions. A scaling matrix, with an individual scaling factor for each transform coefficient, can be used to achieve a different quantization effect per transform coefficient by scaling the transform coefficients individually with the respective scaling factor as part of the quantization. This enables, for example, the quantization effect to be stronger for higher-frequency transform coefficients than for lower-frequency transform coefficients. In HEVC, default scaling matrices are defined for each transform size and can be invoked by flags in the SPS and/or the PPS. Scaling matrices also exist in H.264. In HEVC it is also possible to define own scaling matrices in the SPS or PPS specifically for each combination of color component, transform size and prediction type (intra or inter mode). In an example, deringing filtering is performed for at least the reconstruction sample values from one transform coefficient, using the corresponding scaling factor, as the QP, to determine σd and/or σr.
This could be performed before adding the intra/inter prediction or after adding the intra/inter prediction. Another, less complex approach would be to use the maximum or minimum scaling factor, as the QP, to determine σd and/or σr.
The size of the filter can also be dependent on the QP, so that the filter is larger for large QPs than for small QPs.
For instance, the width and/or the height of the filter kernel of the deringing filter is defined for each QP. Another example is to use a first width and/or a first height of the filter kernel for QP values equal to or smaller than a threshold, and a second, different width and/or a second, different height for QP values larger than the threshold.
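The two-size threshold example above could be sketched as follows; the concrete kernel sizes and the threshold value are illustrative assumptions, not values from the source:

```python
def kernel_size_for_qp(qp, threshold=32, small=(3, 3), large=(5, 5)):
    # (width, height) of the deringing filter kernel: one size for QP
    # values equal to or below the threshold, a larger one above it.
    return small if qp <= threshold else large
```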
Example 6
In this example, σd and σr are related to the video resolution.
The σd and σr can be functions of the form:

σd = f5(frame diagonal)
σr = f6(frame diagonal)
The size of the filter can also be dependent on the size of the frame. If both σd and σr are derived based on the frame diagonal, a preferred example is to have different functions f5 ≠ f6.
Small resolutions can contain sharper texture than large resolutions, which can cause more ringing when coding small resolutions. Accordingly, at least one of the spatial parameter and the range parameter can be set such that stronger deringing filtering is applied for small resolutions compared to large resolutions.
Example 7
According to this example, σd and σr are related to the QP, the TU block size, the video resolution and other video properties. The σd and σr can be functions of the form:

σd = f7(QP, TU block size, video resolution, ...)
σr = f8(QP, TU block size, video resolution, ...)
An example may comprise the first example combined with such functions.
Example 8
In this example, the deringing filter is applied if the inter prediction is interpolated, e.g. for non-integer pixel motion, or if the intra prediction is predicted from reference samples in a specific direction (e.g. non-DC), or if the transform block has non-zero transform coefficients.
Deringing can be applied directly after intra/inter prediction to improve the accuracy of the prediction signal, or directly after the transform on the residual samples to remove transform effects, or on the reconstructed samples (after addition of intra/inter prediction and residual) to remove ringing effects from both prediction and transform, or both on the intra/inter prediction and on the residual or reconstruction.
Example 9
In this example, the filter weights (w_d, w_r or, similarly, σd, σr) and/or the filter size can be set individually for the intra prediction mode and/or the inter prediction mode.
The filter weights and/or filter size can be different in the vertical and horizontal directions depending on the intra prediction mode or the interpolation filter used for inter prediction. For example, if close-to-horizontal intra prediction is performed, the weights could be smaller for the horizontal direction than for the vertical direction, and for close-to-vertical intra prediction the weights could be smaller for the vertical direction than for the horizontal direction. If sub-pel interpolation with an interpolation filter having negative filter coefficients is applied only in the vertical direction, the filter weights could be smaller in the horizontal direction than in the vertical direction, and if such an interpolation filter is applied only in the horizontal direction, the filter weights could be smaller in the vertical direction than in the horizontal direction.
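A sketch of how such direction-dependent weights might be chosen is shown below. The angle convention, the 25-degree proximity threshold and the concrete sigma values are all assumptions made for illustration, not values from the application:

```python
def directional_sigma_d(intra_mode_angle_deg, base_sigma=0.8, reduced_sigma=0.4):
    """Return (sigma_d_horizontal, sigma_d_vertical) for one block.

    Near-horizontal intra prediction -> weaker filtering in the
    horizontal direction; near-vertical -> weaker vertically.
    All thresholds and values here are illustrative placeholders.
    """
    a = intra_mode_angle_deg % 180
    near_horizontal = min(a, 180 - a) < 25   # close to 0/180 degrees
    near_vertical = abs(a - 90) < 25         # close to 90 degrees
    if near_horizontal:
        return reduced_sigma, base_sigma
    if near_vertical:
        return base_sigma, reduced_sigma
    return base_sigma, base_sigma
```

The same selection could be driven by the inter interpolation filter instead of the intra mode angle.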
Example 10
In this example the filter weights (wd, wr or similarly σd, σr) and/or filter size can depend on the position of non-zero transform coefficients.
The filter weights and/or filter size can be different in the vertical and horizontal directions depending on the non-zero transform coefficient positions. For example, if non-zero transform coefficients only exist in the vertical direction at the lowest frequency in the horizontal direction, the filter weights can be smaller in the horizontal direction than in the vertical direction. Alternatively, the filter is only applied in the vertical direction. Similarly, if non-zero transform coefficients only exist in the horizontal direction at the lowest frequency in the vertical direction, the filter weights can be smaller in the vertical direction than in the horizontal direction. Alternatively, the filter is only applied in the horizontal direction.
The filter weights and/or filter size can also be dependent on the existence of non-zero transform coefficients above a certain frequency. The filter weights can be smaller if only low-frequency non-zero transform coefficients exist than when high-frequency non-zero transform coefficients exist.

Example 11
In this example the filter weights (wd, wr or similarly σd, σr) and/or filter size can differ depending on the transform type.
Type of transform can refer to transform skip, KLT-like transforms, DCT-like transforms, DST transforms, non-separable 2D transforms, rotational transforms and combinations of those. As an example, the bilateral filter could be applied only to fast transforms, with the weight equal to 0 for all other transform types.
Some types of transforms can require smaller weights than others, since they cause less ringing.
When transform skip is used, no transform is applied and ringing will therefore not come from the basis functions of a transform. Still, there would be some quantization error, due to quantization of the residual, that benefits from deringing filtering. However, in such a case the weight could potentially be smaller in order to avoid over-filtering. More specialized transforms like the KLT could possibly also benefit from filtering, but likely from less strong filtering, i.e., smaller filter weights and σd, σr, than for the DCT and DST.
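The idea reduces to a simple mapping from transform type to filter strength. A sketch follows; the specific values, and treating an unknown type as weight 0 (filter disabled), are illustrative assumptions rather than values from the application:

```python
def transform_type_sigma_d(transform_type):
    """Illustrative sigma_d per transform type.

    Transform skip gets the weakest filtering (no basis-function
    ringing, only quantization error), the KLT less than DCT/DST.
    All values are placeholders.
    """
    strengths = {
        "skip": 0.3,   # only quantization error to smooth
        "KLT": 0.6,    # specialized transform, less ringing
        "DCT": 0.92,
        "DST": 0.92,
    }
    return strengths.get(transform_type, 0.0)  # 0.0 = filter disabled
```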
Example 12
In this example the filtering may be implemented as a differential filter whose output is clipped (Clip) to be larger than or equal to a MIN value and less than or equal to a MAX value, and added to the pixel value, instead of using a smoothing filter kernel like the Gaussian.
I_F(i,j) = I(i,j) + Clip(MIN, MAX, (d ∗ I)(i,j)), where d denotes the differential filter kernel and ∗ convolution.
The differential filter can for example be designed as the difference between a dirac function and a Gaussian filter kernel. A sign (s) can optionally also be used to make the filtering enhance edges rather than smooth edges, if that is desired for some cases. The MAX and MIN values can be a function of other parameters, as discussed in other examples.
The clipping function can be omitted, but using it allows an extra freedom to limit the amount of filtering, enabling the use of a stronger bilateral filter while limiting how much it is allowed to change the pixel value.
To allow for different MAX and MIN values in the horizontal and the vertical direction the filtering can be described as a vertical filtering part and a horizontal filtering part as shown below:
I_F(i,j) = I(i,j) + Clip(MIN_ver, MAX_ver, (d_ver ∗ I)(i,j)) + Clip(MIN_hor, MAX_hor, (d_hor ∗ I)(i,j))
The MAX_hor, MAX_ver, MIN_hor and MIN_ver can be a function of other parameters, as discussed in other examples.
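A one-dimensional sketch of this clipped differential filtering is given below. The kernel (a dirac minus a [1/4, 1/2, 1/4] smoothing kernel), the clipping bounds, and the sign convention (subtract the high-pass response to smooth, add it to enhance edges) are all illustrative assumptions:

```python
def clip(v, lo, hi):
    return max(lo, min(hi, v))

def differential_filter_1d(samples, max_delta=4, min_delta=-4, sign=1):
    """Clipped differential filtering of one row of integer samples.

    delta is the response of d = dirac - [1/4, 1/2, 1/4], a simple
    high-pass estimate of local ringing. With sign=1 the clipped
    response is subtracted (smoothing); sign=-1 enhances edges.
    """
    out = list(samples)
    for i in range(1, len(samples) - 1):
        smoothed = (samples[i - 1] + 2 * samples[i] + samples[i + 1]) // 4
        delta = samples[i] - smoothed
        # Clipping limits how much the filter may change the pixel.
        out[i] = samples[i] - sign * clip(delta, min_delta, max_delta)
    return out
```

A strong edge such as [0, 100, 0] is only softened by max_delta, illustrating how clipping allows a strong filter while bounding its effect.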
Example 13
In this example, one aspect is to keep the size of a LUT small. In case we set the σd and σr parameters using

σd = f(TU block width)

and

σr = g(QP).
Then, the size of the LUT can become quite big. As an example, if we assume 10-bit accuracy, the absolute difference between two luma values can be between 0 and 1023. Thus, if we know the TU block width and the QP, we need to store 1024 values, which in floating point occupy 4096 bytes.
There are four different TU sizes available. This means that we need 4 look-up tables of size 4096, which equals 16384 bytes or 16 kilobytes. This can be expensive in a hardware implementation. Therefore, in one example, we take advantage of the fact that Equation 1 can be rewritten as
ω(i, j, k, l) = exp(−((i − k)² + (j − l)²)/(2σd²)) · exp(−(I(i,j) − I(k,l))²/(2σr²)).   (Equation 5)
If we keep σr fixed, we can now create one LUT for the expression

exp(−(I(i,j) − I(k,l))²/(2σr²)),

which will occupy 4096 bytes. The first factor of the expression in Equation 5 depends on σd. Since there are four TU sizes, there are four different possible values of σd.
Thus a LUT of only four values is sufficient to obtain
exp(−((i − k)² + (j − l)²)/(2σd²)).
Four values can be stored in 4*4 = 16 bytes. Thus, in this solution we have lowered the storage needs for the LUT from 16384 bytes to 4096 + 16 = 4112 bytes, or approximately 4 kB. Now, for the special case with the plus-shaped filter, we can further notice that the distance (i − k)² + (j − l)² will always be equal to 1 (in the case of the four neighbors) or 0 (in the case of the middle pixel). We can therefore write, for each of the four neighbors,

ω(i, j, k, l) = exp(−1/(2σd²)) · exp(−(I(i,j) − I(k,l))²/(2σr²)),

where we have used the fact that ω(i, j, k, l) is equal to 1 for the middle pixel, and we can write 1 as

1 = exp(−0/(2σd²)) · exp(−0/(2σr²)).

This means that we can write

ω(i, j, k, l) = ωd · r(ΔI), where ωd = exp(−1/(2σd²)) and r(ΔI) = exp(−ΔI²/(2σr²)), with ΔI = I(i,j) − I(k,l).

Equation (2) thus becomes:

I_D(i,j) = (I(i,j) · 1 + Σ_(neighbors) I(k,l) · ωd · r(ΔI)) / (1 + Σ_(neighbors) ωd · r(ΔI)),

and we can see that we can divide both the numerator and the denominator with the weight of the middle pixel, exp(−0/(2σd²)) · exp(−0/(2σr²)) = 1, which yields

I_D(i,j) = (I(i,j) + ωd Σ_(neighbors) I(k,l) r(ΔI)) / (1 + ωd Σ_(neighbors) r(ΔI)).   (Equation 7)

If we let I0 be the intensity of the middle pixel, I0 = I(i,j), and we let the intensity of the neighboring upper pixel be I1 = I(i, j − 1), the intensity of the neighboring right pixel be I2 = I(i + 1, j), the intensity of the neighboring left pixel be I3 = I(i − 1, j), and the intensity of the neighboring lower pixel be I4 = I(i, j + 1), we can write Equation 7 as

I_D = (I0 + ωd(I1 r1 + I2 r2 + I3 r3 + I4 r4)) / (1 + ωd(r1 + r2 + r3 + r4)), where rn = exp(−(In − I0)²/(2σr²)).
The largest possible value for

exp(−(I(i,j) − I(k,l))²/(2σr²))

comes when the difference in intensity is zero, which will give a value of 1.0. Assume that we want to use 8 bits for the filtering. We then simply store the value, rounded to 8 bits, in the LUT. By doing this, we can use a single byte per LUT entry, which means that we can go down from 1024*4 + 16 = 4112 bytes to 1024 + 16 = 1040 bytes, or about 1 kByte. Furthermore, we know that the largest possible value for σr will be 16.5, assuming the largest QP we will use is 59, which means that every LUT entry where the difference in intensity is larger than 59 will get a value before rounding that is smaller than half an LSB, and which will therefore be rounded to zero. Hence it is not necessary to extend the LUT to more than 59. This reduces the LUT size to 60 + 16 = 76 bytes, or about 0.07 kilobytes. The difference in intensity can be checked against 59, and if it is larger than 59 it is set to 59. The value that will then be fetched from the LUT will be 0 (since the LUT entry for 59 is zero), which is correct. An alternative is to make the LUT larger, up to the nearest power of two minus one, in this case 63. It is then sufficient to check if any bit larger than bit 5 is set. If so, 63 is used; otherwise the value is used as is.
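The plus-shaped filtering with a small 8-bit range LUT can be sketched as follows. The 60-entry LUT with a clamp at 59 and σr = 16.5 follow the example; the 255 scale factor for the 8-bit quantization and the value wd = 0.82 standing in for exp(−1/(2σd²)) are illustrative assumptions:

```python
import math

SIGMA_R = 16.5     # largest range parameter from the example
LUT_SIZE = 60      # differences 0..59; larger differences round to 0

# 8-bit range LUT: round(255 * exp(-d^2 / (2 * sigma_r^2))).
RANGE_LUT = [round(255 * math.exp(-(d * d) / (2.0 * SIGMA_R ** 2)))
             for d in range(LUT_SIZE)]

def plus_filter_pixel(i0, i1, i2, i3, i4, wd=0.82):
    """Plus-shaped bilateral filtering of one pixel.

    i0 is the middle pixel, i1..i4 its four neighbors. wd stands in
    for exp(-1/(2*sigma_d^2)); its value here is an assumption.
    """
    num, den = 255.0 * i0, 255.0            # middle pixel has weight 1
    for neighbor in (i1, i2, i3, i4):
        diff = min(abs(neighbor - i0), LUT_SIZE - 1)  # clamp at 59
        r = RANGE_LUT[diff]
        num += wd * r * neighbor
        den += wd * r
    return num / den
```

Flat areas pass through unchanged, and intensity differences of 59 or more contribute nothing, matching the truncated LUT.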
Example 14
In this example the approach as described above can be implemented with filtering in float or in integers (8, 16 or 32 bit). Typically, a table lookup is used to determine the respective weight. Here is an example of filtering in integers that avoids division by doing a table lookup of a multiplication factor and a shift factor.
I_F(i,j) = ( Σ_(k,l) ω(i,j,k,l) · I(k,l) · lookup_M(A) + roundF ) » lookup_Sh(A), where A = Σ_(k,l) ω(i,j,k,l).
Here, lookup_M determines a multiplication factor that brings the gain of the filtering close to unity (the weights sum up to 1 « lookup_Sh), given that the "division" using right shift (») is limited to division by powers of two. lookup_Sh(A) gives a shift factor that together with the multiplication factor lookup_M gives a sufficient approximation of 1/A. roundF is a rounding factor which is equal to (1 « lookup_Sh) » 1. If this approximation is done so that the gain is less than or equal to unity, the filtering will not move the value of the filtered pixel outside the range of the pixel values in the neighborhood before the filtering.
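The multiply-and-shift idea can be sketched as follows. Using a fixed 15-bit shift and computing the factor on the fly (rather than reading both from precomputed tables indexed by the weight sum) are illustrative simplifications:

```python
def reciprocal_mult_shift(a, sh=15):
    """Return (m, sh) such that (x * m + (1 << (sh - 1))) >> sh ~ x / a.

    Using floor keeps the gain m / 2**sh <= 1/a, so the filtered
    value cannot overshoot the exact weighted average.
    """
    m = (1 << sh) // a
    return m, sh

def divide_approx(x, a):
    """Approximate x // a with one multiply, one add and one shift."""
    m, sh = reciprocal_mult_shift(a)
    round_f = 1 << (sh - 1)        # rounding factor: half the divisor
    return (x * m + round_f) >> sh
```

In a real implementation, `m` and `sh` would come from small LUTs indexed by the accumulated weight sum, as the text describes.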
Example 15
In this example one approach to reduce the amount of filtering is to omit filtering if the sum of the weights is equal to the weight for the center pixel.
Another approach is to consider which weight is needed on neighboring pixels to be able to change the value of the current pixel. Let wn be the sum of the neighboring weights and wtot the total sum of weights including the center pixel. Then consider 10-bit data, 0 to 1023. To get an impact of 1, wn must satisfy (1023·wn)/wtot ≥ 1, i.e. wn ≥ wtot/1023, or in a fixed-point implementation wn ≥ (wtot + (1 « 9)) » 10. Thus, if the sum of the neighboring weights is below this, no filtering needs to be deployed, since the filtering will anyway not change the pixel value.
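The fixed-point bound above can be sketched directly; the function name and the 10-bit default are the only additions to what the text states:

```python
def should_skip_filtering(neighbor_weight_sum, total_weight_sum, bit_depth=10):
    """Skip filtering when the neighbor weights cannot change the pixel.

    Implements wn < (wtot + (1 << (bit_depth - 1))) >> bit_depth:
    below this threshold the weighted average rounds back to the
    center value, so the filter can be skipped entirely.
    """
    threshold = (total_weight_sum + (1 << (bit_depth - 1))) >> bit_depth
    return neighbor_weight_sum < threshold
```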
Example 16
The filtering as described in other examples can alternatively be performed by separable filtering in the horizontal and vertical directions, instead of the 2D filtering mostly described in the other examples.
In addition to the above examples described in the earlier application, the following examples will now be described in order to provide further context to the present embodiments.
Example 17
In this example one set of weights (wd, wr or similarly σd, σr) and/or filter size is used for blocks that have been intra predicted and another set of weights and/or filter size is used for blocks that have been inter predicted. Typically, the weights are set to reduce the amount of filtering for blocks which have been predicted with higher quality compared to blocks that have been predicted with lower quality. Since blocks that have been inter predicted typically have higher quality than blocks that have been intra predicted, they are filtered less to preserve the prediction quality.

One example is to have one wd (or similarly σd) for blocks that have been intra predicted and a smaller wd (or similarly σd) for blocks that have been inter predicted. Example weights for intra predicted blocks are:
[example weight values shown as an image in the original]
Example weights for inter predicted blocks are:
[example weight values shown as an image in the original]
Example 18

In this example one set of weights (wd, wr or similarly σd, σr) and/or filter size depends on the picture type/slice type.
One example is to use one set of weights for intra pictures/slices and another set of weights for inter pictures/slices. One example is to have one wd (or similarly σd) for pictures/slices that have only been intra predicted and a smaller wd (or similarly σd) for other pictures/slices.
Example weights for intra pictures/slices (e.g. I_SLICE) are:
[example weight values shown as an image in the original]

Example weights for inter pictures/slices (e.g. P_SLICE, B_SLICE) are:

[example weight values shown as an image in the original]
B slices (bi-prediction allowed), which typically have better prediction quality than P slices (only single prediction), can in another variant of this example have a smaller weight than P slices.
In another variant, generalized B-slices that are used instead of P-slices for uni-directional prediction can have the same weight as P-slices. "Normal" B-slices, which can predict from both future and past, can have a larger weight than generalized B-slices. Example weights for "normal" B-slices are:
[example weight values shown as an image in the original]
Example 19
In this example one set of weights (wd, wr or similarly σd, σr) and/or filter size is used for intra pictures/slices, another set of weights is used for inter pictures/slices that are used as reference for prediction of other pictures, and a third set of weights is used for inter pictures/slices that are not used as reference for prediction of other pictures. One example is to have one wd (or similarly σd) for pictures/slices that have only been intra predicted, a somewhat smaller wd (or similarly σd) for pictures/slices that have been inter predicted and are used for predicting other pictures, and the smallest wd (or similarly σd) for pictures/slices that have been inter predicted but are not used for prediction of other pictures (non-reference pictures).
Example weights for intra pictures/slices (e.g. I_SLICE) are:
[example weight values shown as an image in the original]
Example weights for inter pictures/slices (e.g. P_SLICE, B_SLICE) that are not used for reference (non-reference pictures) are:
[example weight values shown as an image in the original]
Example weights for inter pictures/slices (e.g. P_SLICE, B_SLICE) that are used for reference are:
[example weight values shown as an image in the original]
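Examples 17 to 19 reduce to choosing a parameter set from the coding context. A sketch is given below; the ordering (intra strongest, referenced inter weaker, non-reference inter weakest) comes from the text, while the numeric values are placeholders:

```python
def sigma_d_for_slice(slice_type, used_for_reference=True):
    """Illustrative per-slice-type spatial strength selection.

    Intra slices get the largest sigma_d, referenced inter slices
    a smaller one, non-reference inter slices the smallest.
    All numeric values are made-up placeholders.
    """
    if slice_type == "I":
        return 0.92
    if slice_type in ("P", "B") and not used_for_reference:
        return 0.52                  # non-reference picture
    return 0.72                      # referenced inter slice
```

The same dispatch could be refined further, e.g. distinguishing P-slices, generalized B-slices and "normal" B-slices as in Example 18.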
Example 20
In this example, to enable some adaptivity with respect to the used weights (at least one or all of wd, wr or similarly σd, σr), an encoder can select which values of the weights to use and encode them in the SPS (sequence parameter set), PPS (picture parameter set) or slice header.
A decoder can then decode the values of the weights to be used for filtering the respective picture/slice.
In a variant of this example, specific values of the weights for blocks that are intra predicted, as compared to blocks that are inter predicted, are encoded in the SPS/PPS or slice header. A decoder can then decode the values of the weights to be used for blocks that are intra predicted and the values of the weights to be used for blocks that are inter predicted.

A data processing system, as illustrated in Figure 7, can be used to implement the filter of the examples described above. The data processing system includes at least one processor that is further coupled to a network interface via an interconnect. The at least one processor is also coupled to a memory via the interconnect. The memory can be implemented by a hard disk drive, flash memory, or read-only memory and stores computer-readable instructions. The at least one processor executes the computer-readable instructions and implements the functionality described above. The network interface enables the data processing system to communicate with other nodes in a network. Alternative examples may include additional components responsible for providing additional functionality, including any functionality described above and/or any functionality necessary to support the solution described herein.
Prior to describing the method according to the embodiment of Figure 8, some examples will first be described in Figures 9 to 13, which relate to embodiments forming part of a co-pending application being filed concurrently herewith by the present Applicant.
Example 21
Referring to Figure 9, this example relates to filtering blocks that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
Having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block to the right of the current block. In this example, all pixels are filtered except the rightmost column of the current block if the block to the right can use intra prediction. Thus, after all samples of the current block have been predicted, the block is reconstructed, for example by adding the dequantized and inverse transformed coefficients (residual) to the prediction samples. Since the bilateral filter is applied to all samples except the rightmost column, in a case where there is a block to the right that uses intra prediction by extrapolation, this can be predicted directly after reconstruction of the current block, if desired, and does not have to wait for the filtering to be performed. This is illustrated in the example of Figure 9 where a 4x4 block is used, but it is noted that the same applies to all sizes of blocks, including blocks that are rectangular. Pixels to be filtered are marked "x" and pixels that are not filtered are marked "o". As can be seen from Figure 9, all pixels in the block are filtered, except the rightmost column of the current block.
Thus, according to this example, the method of filtering further comprises the steps of: determining if a block to the right of a current block can use intra prediction, and, if so, selectively filtering all pixels in the block, except the rightmost column of the current block.
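The selection in this example can be sketched as a boolean mask over the block, where True marks pixels that should be filtered. Representing the decision as a mask is an implementation choice for illustration, not something prescribed by the text:

```python
def filter_mask(width, height, right_block_may_use_intra):
    """Mask of pixels to deringing-filter for one block.

    The rightmost column is left unfiltered when the block to the
    right may use intra prediction, so that block can be predicted
    without waiting for the filtering to finish.
    """
    return [[not (right_block_may_use_intra and x == width - 1)
             for x in range(width)]
            for y in range(height)]
```

For a 4x4 block this reproduces the "x"/"o" pattern of Figure 9: three filtered columns and one unfiltered rightmost column.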
Example 22
Referring to Figure 10, this example relates to filtering blocks that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
Having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block below the current block. In this example, all pixels are filtered except the bottom row of the current block if the block below uses intra prediction. Thus, after all samples of the current block have been predicted, the block is reconstructed, for example by adding the dequantized and inverse transformed coefficients (residual) to the prediction samples. Since the bilateral filter is applied to all samples except the bottom row, in a case where there is a block below the current block that uses intra prediction by extrapolation, this can be predicted directly after the reconstruction of the current block, if desired, and does not have to wait for the filtering to be performed. This is illustrated in Figure 10 where a 4x4 block is used, but it is noted that the same applies to all sizes of blocks, including blocks that are rectangular. Pixels to be filtered are marked "x" and pixels that are not to be filtered are marked "o". As can be seen from Figure 10, all pixels in the block are filtered, except for the bottom row of the current block. Thus, according to this example, the method of filtering further comprises the steps of: determining if a block below a current block can use intra prediction, and, if so, selectively filtering all pixels in the block, except the bottom row of the current block. In one example, the bottommost row is never filtered if the next block in the coding order is below the current block (including below to the left), as will be described further in examples 7 and 8 below.
In another example, the bottom row is never filtered, as will also be described further in examples 7 and 8 below.
Example 23
Referring to Figure 11, this example relates to filtering blocks of samples that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
Having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block to the right of the current block. To reduce the number of pixels (samples) to be filtered, in this example only pixels in the rightmost column of the current block are filtered, if the block to the right can use intra prediction. Thus, after all samples of the current block have been predicted, the block is reconstructed, for example by adding the dequantized and inverse transformed coefficients (residual) to the prediction samples. Then the rightmost column is filtered and can be used for intra prediction of a block to the right of the current block.
This is illustrated in Figure 11 where a 4x4 block is used, but it is noted that the same applies to all sizes of blocks, including blocks that are rectangular. Pixels to be filtered are marked "x" and pixels that are not to be filtered are marked "o". As can be seen from Figure 11, only pixels in the rightmost column of the current block are filtered.
Thus, according to this example, the method of filtering further comprises the steps of: determining if a block to the right of a current block can use intra prediction, and, if so, selectively filtering only pixels in the rightmost column of the current block.

Example 4
Referring to Figure 12, this example relates to filtering of samples that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
Having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block below the current block. To reduce the number of pixels (samples) to be filtered, in this example only pixels in the bottom row of the current block are filtered, if the block below can use intra prediction. Thus, after all samples of the current block have been predicted, the block is reconstructed, for example by adding the dequantized and inverse transformed coefficients (residual) to the prediction samples. Then the bottom row of the current block is filtered and can then be used for intra prediction of a block below the current block.
This is illustrated in the example of Figure 12 where a 4x4 block is used, but it is noted that the same applies to all sizes of blocks, including blocks that are rectangular. Pixels to be filtered are marked "x" and pixels that are not filtered are marked "o". As can be seen from Figure 12, only pixels in the bottom row of the current block are filtered.
Thus, according to this example, the method of filtering further comprises the steps of: determining if a block below a current block can use intra prediction, and, if so, selectively filtering only pixels in the bottom row of the current block.

Example 5
Referring to Figure 13, this example relates to filtering blocks that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards. Having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block to the right of the current block. In this example, all pixels are filtered except the rightmost column of the current block and the bottom row of the current block, if the blocks to the right and below use intra prediction. Thus, after all samples of the current block have been predicted, the block is reconstructed, for example by adding the dequantized and inverse transformed coefficients (residual) to the prediction samples.
Since the bilateral filter is applied to all samples except the rightmost column and the bottom row, in a case where there is a block to the right or below that uses intra prediction by extrapolation, this can be predicted directly after reconstruction of the current block, if desired, and does not have to wait for the filtering to be performed.
This is illustrated in the example of Figure 13 where a 4x4 block is used, but it is noted that the same applies to all sizes of blocks, including blocks that are rectangular. Pixels to be filtered are marked "x" and pixels that are not to be filtered are marked "o". This can be seen from Figure 13, where all pixels in the block are filtered, except the rightmost column and the bottom row of the current block. Thus, according to this example, the method of filtering further comprises the steps of: determining if a block to the right of and below a current block can use intra prediction, and, if so, selectively filtering all pixels in the block, except the rightmost column and the bottom row of the current block. In one example, the rightmost column and the bottom row are never filtered if the block is the top-left block of a quadrant, since the next block in coding order is to the right and directly after that the block below, as will be described further in examples 7 and 8 below.
In another example, the rightmost column and the bottom row are never filtered, as will be described further in examples 7 and 8 below.
Example 6
This example relates to filtering blocks that can be used for intra prediction by extrapolation, as for example intra prediction in HEVC, or as in upcoming standards.
Having a filter, for example the bilateral filter, directly after the transform for the current block adds one additional filtering step before one can perform intra prediction of an adjacent block, for example a block to the right of the current block. In this example the next block is predicted from unfiltered pixels. This means that the next block can be predicted before the filtering of the reconstructed samples in the block has finished, breaking the latency problem.
Example 7
This example relates to filtering a current block where the pixels of the current block can be used later for prediction of a subsequent block; in particular, where the pixels of the current block can be used for prediction of an immediately subsequent block, i.e., the next block in the decoding (or coding) order. If all pixels in the current block are filtered, this filtering would need to finish before the pixels can be used for prediction in the immediately subsequent block. A decoder would therefore need to wait to decode the immediately subsequent block until the filtering of the current block is finished. This wait may mean that the decoder can run out of cycles, i.e., it will not have time to decode the entire frame before it must be displayed. This problem is most acute for small blocks, such as 4x4 blocks, since these take more cycles to decode per pixel.

Therefore, according to this example, the last column and the last row of the current block are never filtered, as shown in Figure 13. Since an immediately subsequent block can only use the last row or the last column of pixels from the current block for prediction, and since these pixels remain unfiltered, it is possible for the decoding of the subsequent block to commence before filtering of the current block has ended. This means that decoding of the immediately subsequent block can happen in parallel with the filtering of the current block, or at least in parallel with some of the filtering of the current block, whereby such decoding in parallel saves cycles and reduces latency.
Thus, according to such an example, a step of selectively filtering comprises never filtering the last column and the last row of the current block. In such an example, the decoding of an immediately subsequent block can commence before filtering of a current block has ended.
In such an example, decoding of an immediately subsequent block occurs in parallel with at least some of the filtering of the current block.

Example 8
In another example, filtering is avoided only for 4x4 blocks. In detail, the last column and the last row of the current block are not filtered, as shown in Figure 13, if the block is the smallest possible block, such as a 4x4 block. Since an immediately subsequent block can only use the last row or the last column of pixels from the current block for prediction, and since these pixels remain unfiltered, it is possible for the decoding of the subsequent block to commence before filtering of the current block has ended, if the current block is a 4x4 block. This means that decoding of the immediately subsequent block can happen in parallel with the filtering of the current 4x4 block, saving cycles and reducing latency. Since the clock cycle budget is especially tight for 4x4 blocks, saving cycles for these blocks is sufficient. Also, it will allow for more pixels being filtered compared to example 7, since all pixels of larger blocks, such as 4x8 blocks, may be filtered.
In a further example, similar to example 8 above, a group of block sizes, such as 4x4, 4x8 and 8x4, is excluded from having their last row and last column filtered.
In such an example the step of not filtering a last column and a last row of the current block is applied to a group of block sizes, for example block sizes comprising 4x4, 4x8 and 8x4.
Next, embodiments of the present application will be described, in which filtering a right column or bottom row, or both, is avoided.
Embodiment 1
There can only be one immediately subsequent block. Sometimes this immediately subsequent block is to the right of the current block, and sometimes it is below the current block. Therefore, in one embodiment, the method first attempts to determine where the immediately subsequent block will occur. If it occurs to the right, the method avoids filtering the rightmost column of the current block, as shown in Figure 9.

If the immediately subsequent block occurs below, the process avoids filtering the bottommost row of the current block, as shown in Figure 10. If it cannot be determined from the available information where the immediately subsequent block will occur, the method avoids filtering both the rightmost column and the bottommost row of pixels in the current block.
This way, the decoding of the immediately subsequent block is not delayed, thus reducing latency and saving cycles. In particular, part of the decoding of the current block, namely the filtering, can happen in parallel with decoding the immediately subsequent block.
According to one embodiment, the method keeps track of the Z-order number. Typically, blocks are arranged in Z-order according to the table below.
0  1
2  3
In this case, if the current block is of type 0 (or 2), the rightmost column of pixels is not filtered, since the immediately subsequent block will be a block of type 1 (or 3), which will be to the right of the current block.

If the current block is of type 1, the bottommost row of pixels is not filtered, since the immediately subsequent block will be a block of type 2, which will be below the current block.

If the current block is of type 3, neither the rightmost column of pixels nor the bottommost row of the current block is filtered, since it is not known whether the immediately subsequent block will be to the right or below.
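The rule of this embodiment can be sketched as a mapping from the lower-level Z-order position of the current block to the set of edges that are left unfiltered:

```python
def unfiltered_edges(z_type):
    """Edges of the current block to leave unfiltered (Z-order
    positions 0..3 within a 2x2 quadrant)."""
    if z_type in (0, 2):              # next block is to the right
        return {"right"}
    if z_type == 1:                   # next block is below
        return {"bottom"}
    return {"right", "bottom"}        # type 3: next position unknown
```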
Embodiment 2
According to this embodiment, which is similar to embodiment 1, the method keeps track of the Z-order number at two or more levels.
0,0  0,1  1,0  1,1
0,2  0,3  1,2  1,3
2,0  2,1  3,0  3,1
2,2  2,3  3,2  3,3
In the example above, the first number is a higher-level Z-order number, and the second number is a lower-level Z-order number. In the example of embodiment 1 above, for block 0,3 the method would have to avoid filtering both the last row and the last column, since the example of embodiment 1 was only looking at its lower-level Z-order number (i.e. 3).
However, if the method also looks at the higher-level Z-order number (0), the method can know that the immediately subsequent block will be to the right. Hence the method can avoid the filtering only on the rightmost column. Thus, for a block with a lower-level Z-order number of 3, the method avoids filtering the last column if the higher-level Z-order number is 0 or 2, and the method avoids filtering the last row if the higher-level Z-order number is 1. If the higher-level Z-order number is 3, the method avoids filtering both the last row and the last column as before, since the method again does not know where the immediately subsequent block will end up. It is possible to look at even higher levels of Z-order numbers in an analogous fashion.

Embodiment 3
According to this embodiment, in a similar manner to embodiment 1 above, the Z-order is used to determine whether or not to avoid filtering certain pixels, but where filtering is only avoided if the block is of size 4x4. According to another example relating to such an embodiment, the filtering is only avoided if the block size is from a group of block sizes, for example 4x4, 4x8 or 8x4.

Embodiment 4
According to this embodiment, in a similar manner to embodiment 2 above, two or more different Z-order levels are used to determine whether or not to avoid filtering certain pixels, but where filtering is only avoided if the block is of size 4x4.
According to another example relating to such an embodiment, the filtering is only avoided if the block size is from a group of block sizes, for example 4x4, 4x8 or 8x4.

A filter as described in the embodiments herein may be implemented in a video encoder and a video decoder. It may be implemented in hardware, in software or a combination of hardware and software. The filter may be implemented in, e.g. comprised in, user equipment, such as a mobile telephone, tablet, desktop, netbook, multimedia player, video streaming server, set-top box or computer.
Figure 8 shows a method according to a first embodiment, performed by a filter, for filtering of a picture of a video signal. The picture comprises pixels, each pixel being associated with a pixel value, wherein a pixel value is modified by a weighted combination of the pixel value and at least one spatially neighboring pixel value.
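A minimal sketch of such a weighted combination is given below. The plus-shaped support and the weight values are illustrative assumptions, not taken from the embodiments:

```python
def filter_pixel(pic, r, c, w_center=0.8, w_neigh=0.05):
    """Return the modified value of pixel (r, c): a normalized weighted
    combination of its own value and its four spatial neighbours.
    Neighbours falling outside the picture are simply excluded."""
    h, w = len(pic), len(pic[0])
    acc, norm = w_center * pic[r][c], w_center
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < h and 0 <= cc < w:
            acc += w_neigh * pic[rr][cc]
            norm += w_neigh
    return acc / norm
```

For a pixel surrounded by equal-valued neighbours the result moves toward the neighbourhood value, which is the smoothing behaviour the filter relies on.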
The method comprises performing a filtering operation on a block by block basis, each block comprising rows and columns of pixels, for example M rows and N columns, step 801. The filtering operation comprises determining where an immediately subsequent block to the current block will occur, step 803, and avoiding the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur, step 805. According to one embodiment, if it is determined that the immediately subsequent block occurs to the right of the current block, the method comprises avoiding the filtering of the right most column of the current block. According to one embodiment, if it is determined that the immediately subsequent block occurs below the current block, the method comprises avoiding the filtering of the bottom most row of the current block. If it is not possible to determine where the immediately subsequent block occurs, according to one embodiment the method comprises avoiding the filtering of the right most column of the current block and the bottom most row of the current block.
In some examples, determining where an immediately subsequent block occurs comprises monitoring a Z-order number of the current block, wherein the Z-order relates to a sequence in which blocks are decoded or coded, and determining where an immediately subsequent block occurs based on the next number in the Z-order sequence, compared to the Z-order number of the current block. According to other examples, determining where an immediately subsequent block occurs comprises the use of Z-order numbers on at least first and second levels.
For example, the method may comprise monitoring a first level Z-order number of the current block, wherein the first level Z-order relates to a sequence in which blocks are decoded or coded, and monitoring a second level Z-order number of the current block, wherein the second level Z-order relates to a sequence in which groups of blocks at a first level Z-order are decoded or coded.
The method comprises determining where an immediately subsequent block occurs based on the next number in the first level Z-order number, compared to the first level Z- order number of the current block, and, if the location of the immediately subsequent block cannot be determined based on the first level Z-order numbers, using the second level Z-order numbers to determine where the immediately subsequent block will occur. In some examples the step of avoiding filtering is only applied to block sizes comprising 4x4.
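The two-level fallback described above can be sketched as follows. This is a hypothetical illustration; the quadrant layout assumed at both levels is 0 = top left, 1 = top right, 2 = bottom left, 3 = bottom right:

```python
def edges_to_skip_two_level(z_high, z_low):
    """Decide which edges to leave unfiltered from a first (lower) level
    Z-order number z_low, falling back to the second (higher) level
    number z_high when z_low alone is inconclusive."""
    if z_low in (0, 2):
        return {"right"}
    if z_low == 1:
        return {"bottom"}
    # z_low == 3: the next block starts the next higher-level quadrant
    if z_high in (0, 2):
        return {"right"}            # next quadrant is to the right
    if z_high == 1:
        return {"bottom"}           # next quadrant is below
    return {"right", "bottom"}      # z_high == 3: still unknown
```

For the block 0,3 of embodiment 2 this yields only "right", i.e. only the right most column is left unfiltered.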
In other examples the step of avoiding filtering is applied to a group of block sizes, including block sizes comprising 4x4, 4x8 and 8x4. In some examples the decoding of an immediately subsequent block may commence before filtering of a current block has ended.
In some examples the decoding of an immediately subsequent block can occur in parallel with at least some of the filtering of the current block.
In the embodiments above, the Z-order is a certain order in which blocks are coded or decoded. The process starts with, for example, a large block, typically referred to as a CTU (128x128 in JEM, but it could also be 256x256), that is then split into a smaller number of blocks, e.g. four blocks, or into half blocks. As an example, a 64x64 block can be split into four blocks of size 32x32, or two blocks of size 32x64, or two blocks of size 64x32.
This can be done down to a certain minimum block size, typically 4x4. If the CTU is split into four blocks the coding order of those may be from the top left to the top right and then to the bottom left and next the bottom right, as described in the embodiments above. This is the case for all splits into 4 blocks.
In the case of a split into half blocks the order may be from left to right for a vertical split, and from top to bottom for a horizontal split. So it can be noted that blocks in the top left quadrant (64x64) of a 128x128 CTU could be split down to, for example, 4x4 while the block to the right could be 64x64. Each of these blocks can be intra predicted or inter predicted.
As such, besides the possibility of a block to the right or below the current block using samples from the current block for prediction by extrapolation, for example intra prediction, it can also happen that the block to the bottom left of the current block uses samples from the current block. For example, that block can be the block next in processing order. Since a block to the right always can be a prediction block, e.g. an intra block, to minimize impact on latency according to some embodiments it is possible to always omit filtering of the right column when the next block in the processing order is to the right of the current block. Since a block below, or below to the left, always can be a prediction block, e.g. an intra block, in some embodiments it is possible to always omit filtering of the bottom row to minimize impact on latency.
It is noted that the latency problem is larger for small blocks, e.g. 4x4, than for large blocks, e.g. 8x8. As described above, a common way of splitting blocks is what is called a quadtree split, where a block of size WxH is split into four blocks of size (W/2)x(H/2). Another common way of splitting a block is called a binary split, where a block of size WxH is split into two smaller blocks, either of size (W/2)xH or Wx(H/2).
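The two split types can be expressed as a trivial sketch (function names are illustrative):

```python
def quadtree_split(w, h):
    """Quadtree split: a WxH block becomes four (W/2)x(H/2) blocks."""
    return [(w // 2, h // 2)] * 4

def binary_split(w, h, vertical):
    """Binary split: a WxH block becomes two (W/2)xH blocks (vertical
    split) or two Wx(H/2) blocks (horizontal split)."""
    return [(w // 2, h)] * 2 if vertical else [(w, h // 2)] * 2
```

For example, a 16x16 block split horizontally becomes two 16x8 blocks, as happens to block D2 in the QTBT example below.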
A common way to combine quadtree split and binary split is to first split a block using quadtree split, and then split it further using binary split. In this way of operation, quadtree split never follows binary split. Such splitting is often referred to as "quadtree followed by binarytree", QTBT.
An example is shown below in Figures 14a to 14d of a 64x64 block being split using QTBT. Figure 14a shows an example of a starting point, e.g. a 64x64 block, A. Then this block is (quadtree) split into four 32x32 blocks B, C, D, E as shown in Figure 14b.
The block D undergoes a further quadtree split into four 16x16 blocks D1, D2, D3, and D4. After this, some of the blocks are split using binary splitting, as shown in Figure 14c: Block C is split into two 16x32 blocks C1 and C2, D2 is split into two 16x8 blocks D21 and D22, and D4 is split into two 8x16 blocks D41 and D42.
The blocks C1 and C2 are further split using binary splitting, as shown in Figure 14d: C1 is split into two 8x32 blocks C11 and C12, and C2 is split into two 8x32 blocks C21 and C22.

In one embodiment, the method comprises looking at the last split. If the last split was a quadtree split, it is possible to use the Z-order number as described in a previous embodiment. However, if the last split was a binary split, the method instead takes into consideration whether the block in question is the first or the last block of the binary split in decoding order.
As an example, D2 is split horizontally into D21 and D22, where D21 is the first block of the split in decoding order and D22 is the last block of the split in decoding order. This means that D21 is immediately followed by D22 in decoding order. Hence, the method may consider avoiding filtering the lowest (last) row of D21 since it is adjacent to the immediately subsequent block in the decoding order, block D22. If instead the block is split vertically, the method may instead consider avoiding filtering the last column of the first block. An example is D4, which is split vertically into D41 and D42. Here D41 is the first block of the split and D42 is the last block of the split. Hence for D41, the method may consider avoiding filtering the right most (last) column of D41, since it is adjacent to the immediately subsequent block in the decoding order, block D42.
In a similar fashion, with reference to Figure 14d, the method can avoid filtering the last column for C11 and C21, since they are the first blocks of their respective most recent (vertical) splits.
As for the last block of a binary split, in one example the method may avoid filtering both the last row and the last column, e.g. to be safe. However, according to another example the method may also go one step up in the hierarchy to see if the block was first or last. As an example, with reference to Figure 14d, block C12 was the last block of the split C1→C11/C12, so one cannot tell from the most recent split what to do. However, one level up in the hierarchy, C1 was the first block of the vertical split C→C1/C2. Hence it is safe to determine that it is sufficient to avoid filtering the last column of C12.
As for C22, it was the last block both in the split from C2→C21/C22 and furthermore C2 was the last block in the split C→C1/C2. Hence one cannot draw any conclusions from the two most recent splits. In this case, the method may need to go as far back as to the quadtree split. Here, the method can determine that the Z-order number was 1, and hence the method should avoid filtering the last row of C22.
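The walk up the split hierarchy described for C12 and C22 can be sketched as follows. The list encoding of the split history is a hypothetical illustration, listed most recent split first:

```python
def edges_to_skip_from_splits(split_chain):
    """Decide which edges to leave unfiltered by walking up the block's
    split history. Each entry is ('bin', orientation, is_first) for a
    binary split ('v' = vertical, 'h' = horizontal) or ('quad', z) for
    a quadtree split with Z-order number z, most recent split first."""
    for split in split_chain:
        if split[0] == 'bin':
            _, orient, is_first = split
            if is_first:                 # next block is the sibling
                return {'right'} if orient == 'v' else {'bottom'}
        else:                            # quadtree split
            _, z = split
            if z in (0, 2):
                return {'right'}
            if z == 1:
                return {'bottom'}
        # last block of a binary split, or z == 3: look one level up
    return {'right', 'bottom'}           # nothing conclusive: be safe
```

For C12 (last block of C1→C11/C12, where C1 was first of C→C1/C2) this gives only the right column; for C22 it falls all the way back to the quadtree Z-order number 1 and gives only the bottom row.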
In Figure 14d the pixel columns and rows where filtering may be avoided according to the examples described above are shown in shaded highlight. In another embodiment, according to the method, only the smallest such blocks avoid filtering, such as blocks of size 16x8 and smaller. In this case, filtering would only be avoided in the shaded areas of blocks D21, D22, D41 and D42.
From the above it can be seen that, according to some embodiments, the step in Figure 8 of determining where an immediately subsequent block occurs may comprise determining how a block has been partitioned into smaller blocks using quadtree and/or binary splits for coding or decoding.
The method may comprise determining the type of last split and, if the last split was a quadtree split, using a Z-order number as described earlier for determining where an immediately subsequent block will occur.
The method may comprise determining the type of last split and, if the last split was a binary split, avoiding the filtering of the last column of a current block, where the current block is a first block of a most recent vertical split, or avoiding the filtering of the last row of a current block, where the current block is a first block of a most recent horizontal split.
The method may further comprise avoiding the filtering of the last column and last row of a current block, where the current block is a last block of a most recent vertical or horizontal split, or checking one or more higher levels in a block splitting hierarchy to determine whether filtering should be avoided in the last column only or the last row only of a current block.
According to some embodiments, after all pixels (samples) of the current block have been predicted, the method comprises reconstructing the block by adding the dequantized and inverse transformed residual coefficients to the prediction samples. For example, the current block can be predicted, e.g. by intra prediction using reconstructed samples from the current picture or by inter prediction using samples from another picture that has already been reconstructed.
On the encoder side, the error from that prediction compared to the source is typically compressed by a transform. The transform coefficients are quantized to reduce overhead. All coding parameters (prediction parameters, quantized transform coefficients) may be entropy coded to further reduce overhead.
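A toy sketch of this residual pipeline is given below; the transform is taken as the identity for brevity (a real codec would use, e.g., a DCT), so only quantization is modelled:

```python
def encode_block(source, prediction, q_step):
    """Encoder side: prediction error, (identity) transform, quantization."""
    return [round((s - p) / q_step) for s, p in zip(source, prediction)]

def reconstruct_block(prediction, coeffs, q_step):
    """Decoder side: dequantize, inverse (identity) transform, and add
    the residual to the prediction samples."""
    return [p + c * q_step for p, c in zip(prediction, coeffs)]
```

Coarser quantization steps reduce coefficient overhead at the cost of reconstruction error relative to the source.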
Figure 15 shows an example of a filter 1400 according to an embodiment, whereby the filter is implemented as a data processing system. The data processing system includes at least one processor 1401 that is coupled to a network interface 1405. The at least one processor 1401 is also coupled to a memory 1403. The memory 1403 can be implemented by a hard disk drive, flash memory, or read-only memory and stores computer-readable instructions.
The at least one processor 1401 executes the computer-readable instructions and implements the functionality described in the embodiments above. The network interface 1405 enables the data processing system 1400 to communicate with other nodes in a network. Alternative examples may include additional components responsible for providing additional functionality, including any functionality described above and/or any functionality necessary to support the solution described herein.
The filter 1400 may be operative to filter a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value, the filter being configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value.
The filter 1400 may be operative to filter on a block by block basis, each block comprising rows and columns of pixels.
The filter may be operative to determine where an immediately subsequent block to the current block will occur, and to avoid the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur.
The filter may be further operative such that: if it is determined that the immediately subsequent block occurs to the right of the current block, configuring the filter to avoid the filtering of the right most column of the current block; or if it is determined that the immediately subsequent block occurs below the current block, configuring the filter to avoid the filtering of the bottom most row of the current block; or if it is not possible to determine where the immediately subsequent block occurs, configuring the filter to avoid the filtering of the right most column of the current block and the bottom most row of the current block.
The filter may be further operative to determine where an immediately subsequent block occurs by: monitoring a Z-order number of the current block, wherein the Z-order relates to a sequence in which blocks are decoded or coded; and determining where an immediately subsequent block occurs based on the next number in the Z-order sequence, compared to the Z-order number of the current block.
The filter may be further operative to determine where an immediately subsequent block occurs by using Z-order numbers on at least first and second levels, by: monitoring a first level Z-order number of the current block, wherein the first level Z-order relates to a sequence in which blocks are decoded or coded; monitoring a second level Z-order number of the current block, wherein the second level Z-order relates to a sequence in which groups of blocks at a first level Z-order are decoded or coded. The filter may be operative to determine where an immediately subsequent block occurs based on the next number in the first level Z-order number, compared to the first level Z-order number of the current block, and, if the location of the immediately subsequent block cannot be determined based on the first level Z-order numbers, using the second level Z-order numbers to determine where the immediately subsequent block will occur.
The filter 1400 may be further operative to perform filtering operations as described herein, and as defined in the appended claims.
Figure 16 shows an example of part of a video coding system 1500 having a filter 1400 according to an embodiment. The filter 1400 may comprise a filter as described in any of the embodiments herein. The filter 1400 is shown as being positioned between a transform module 1501 for a current block and a prediction module 1503, the prediction module 1503 configured to provide a prediction, for example an intra prediction operation for a block to the right or below a current block.
Figure 17 shows an example of a decoder 1600 that comprises a modifying means, for example a filter as described herein, configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value. The modifying means may be operative to filter on a block by block basis, each block comprising rows and columns of pixels, e.g. M rows and N columns, and determine where an immediately subsequent block to the current block will occur, and avoid the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur. The modifying means may be further operative to perform a filtering method as described herein, and as defined in the appended claims.
It is noted that at least one of the parameters σd and σr may also depend on at least one of: quantization parameter, quantization scaling matrix, transform width, transform height, picture width, picture height, a magnitude of a negative filter coefficient used as part of inter/intra prediction.
The embodiments described herein provide an improved filter for video coding and decoding.
Figure 18 is a schematic block diagram of a video encoder 40 according to an embodiment. A current sample block, also referred to as pixel block or block of pixels, is predicted by performing a motion estimation by a motion estimator/compensator 50 from already encoded and reconstructed sample block(s) in the same picture and/or in reference picture(s). The result of the motion estimation is a motion vector in the case of inter prediction. The motion vector is utilized by the motion estimator/compensator 50 for outputting an inter prediction of the sample block.
An intra predictor 49 computes an intra prediction of the current sample block. The outputs from the motion estimator/compensator 50 and the intra predictor 49 are input in a selector 51 that either selects intra prediction or inter prediction for the current sample block. The output from the selector 51 is input to an error calculator in the form of an adder 41 that also receives the sample values of the current sample block. The adder 41 calculates and outputs a residual error as the difference in sample values between the sample block and its prediction, i.e., prediction block. The error is transformed in a transformer 42, such as by a discrete cosine transform (DCT), and the resulting coefficients are quantized by a quantizer 43 followed by coding in an encoder 44, such as by an entropy encoder. In inter coding, the estimated motion vector is also brought to the encoder 44 for generating the coded representation of the current sample block. The transformed and quantized residual error for the current sample block is also provided to an inverse quantizer 45 and inverse transformer 46 to reconstruct the residual error. This residual error is added by an adder 47 to the prediction output from the motion compensator 50 or the intra predictor 49 to create a reconstructed sample block that can be used as prediction block in the prediction and coding of other sample blocks. This reconstructed sample block is first processed by a device 100 for filtering of a picture according to the embodiments in order to suppress deringing artifacts. The modified, i.e., filtered, reconstructed sample block is then temporarily stored in a Decoded Picture Buffer (DPB) 48, where it is available to the intra predictor 49 and the motion estimator/compensator 50. The modified, i.e., filtered, reconstructed sample block from the device 100 is also coupled directly to the intra predictor 49.
If the deringing filtering instead is applied following inverse transform, the device 100 is preferably instead arranged between the inverse transformer 46 and the adder 47.
An embodiment relates to a video decoder comprising a device for filtering of a picture according to the embodiments.
Figure 19 is a schematic block diagram of a video decoder 60 comprising a device 100 for filtering of a picture according to the embodiments. The video decoder 60 comprises a decoder 61 , such as an entropy decoder, for decoding a bitstream comprising an encoded representation of a sample block to get a set of quantized and transformed coefficients. These coefficients are dequantized in an inverse quantizer 62 and inverse transformed by an inverse transformer 63 to get a decoded residual error.
The decoded residual error is added in an adder 64 to the sample prediction values of a prediction block. The prediction block is determined by a motion estimator/compensator 67 or intra predictor 66, depending on whether inter or intra prediction is performed. A selector 68 is thereby interconnected to the adder 64 and the motion estimator/compensator 67 and the intra predictor 66. The resulting decoded sample block output from the adder 64 is input to a device 100 for filtering of a picture or part of a picture in order to suppress and combat any ringing artifacts. The filtered sample block enters a DPB 65 and can be used as prediction block for subsequently decoded sample blocks. The DPB 65 is thereby connected to the motion estimator/compensator 67 to make the stored sample blocks available to the motion estimator/compensator 67. The output from the adder 64 is preferably also input to the intra predictor 66 to be used as an unfiltered prediction block. The filtered sample block is furthermore output from the video decoder 60, such as output for display on a screen. If the deringing filtering instead is applied following inverse transform, the device 100 is preferably instead arranged between the inverse transformer 63 and the adder 64.
One idea of embodiments of the present invention is to introduce a deringing filter into the Future Video Codec, i.e., the successor to HEVC.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. The word "comprising" does not exclude the presence of elements or steps other than those listed in a claim, "a" or "an" does not exclude a plurality, and a single processor or other unit may fulfil the functions of several units recited in the claims. Any reference signs in the claims shall not be construed so as to limit their scope.

Claims

1. A method, performed by a filter, for filtering of a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value, wherein a pixel value is modified by a weighted combination of the pixel value and at least one spatially neighboring pixel value, the method comprising:
performing a filtering operation on a block by block basis, each block comprising rows and columns of pixels:
wherein the filtering operation comprises:
determining where an immediately subsequent block to the current block will occur; and
avoiding the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur.
2. A method as claimed in claim 1 wherein, if it is determined that the immediately subsequent block occurs to the right of the current block, the method comprises avoiding the filtering of the right most column of the current block.
3. A method as claimed in claim 1 wherein, if it is determined that the immediately subsequent block occurs below the current block, the method comprises avoiding the filtering of the bottom most row of the current block.
4. A method as claimed in claim 1 wherein, if it is not possible to determine where the immediately subsequent block occurs, the method comprises avoiding the filtering of the right most column of the current block and the bottom most row of the current block.
5. A method as claimed in any one of claims 1 to 4, wherein determining where an immediately subsequent block occurs comprises:
monitoring a Z-order number of the current block, wherein the Z-order relates to a sequence in which blocks are decoded or coded; and
determining where an immediately subsequent block occurs based on the next number in the Z-order sequence, compared to the Z-order number of the current block.
6. A method as claimed in any one of claims 1 to 4, wherein determining where an immediately subsequent block occurs comprises the use of Z-order numbers on at least first and second levels, by:
monitoring a first level Z-order number of the current block, wherein the first level Z-order relates to a sequence in which blocks are decoded or coded;
monitoring a second level Z-order number of the current block, wherein the second level Z-order relates to a sequence in which groups of blocks at a first level Z-order are decoded or coded; and
determining where an immediately subsequent block occurs based on the next number in the first level Z-order number, compared to the first level Z-order number of the current block; and,
if the location of the immediately subsequent block cannot be determined based on the first level Z-order numbers, using the second level Z-order numbers to determine where the immediately subsequent block will occur.
7. A method as claimed in any one of the preceding claims, wherein determining where an immediately subsequent block occurs comprises determining how a block has been partitioned into smaller blocks using quadtree and/or binary splits for coding or decoding.
8. A method as claimed in claim 7 comprising, determining the type of last split and, if the last split was a quadtree split, using a Z-order number according to claim 5 or 6 for determining where an immediately subsequent block will occur.
9. A method as claimed in claim 7 comprising, determining the type of last split and, if the last split was a binary split:
avoiding the filtering of the last column of a current block, where the current block is a first block of a most recent vertical split; or
avoiding the filtering of the last row of a current block, where the current block is a first block of a most recent horizontal split.
10. A method as claimed in claim 9 comprising avoiding the filtering of the last column and last row of a current block, where the current block is a last block of a most recent vertical or horizontal split; or,
checking one or more higher levels in a block splitting hierarchy to determine whether filtering should be avoided in the last column only or last row only of a current block.
11. A method as claimed in any one of claims 1 to 6, wherein the step of avoiding filtering is only applied to block sizes comprising 4x4.
12. A method as claimed in any one of claims 1 to 6, wherein the step of avoiding filtering is applied to a group of block sizes, including block sizes comprising 4x4, 4x8 and 8x4.
13. A method as claimed in any one of the preceding claims, wherein decoding of an immediately subsequent block commences before filtering of a current block has ended.
14. A method as claimed in claim 13, wherein decoding of an immediately subsequent block occurs in parallel with at least some of the filtering of the current block.
15. A filter, for filtering of a picture of a video signal, wherein the picture comprises pixels, each pixel being associated with a pixel value, the filter being configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value, wherein the filter is configured to:
filter on a block by block basis, each block comprising rows and columns of pixels:
determine where an immediately subsequent block to the current block will occur; and
avoid the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur.
16. A filter as claimed in claim 15, wherein the filter is further configured such that: if it is determined that the immediately subsequent block occurs to the right of the current block, configuring the filter to avoid the filtering of the right most column of the current block; or
if it is determined that the immediately subsequent block occurs below the current block, configuring the filter to avoid the filtering of the bottom most row of the current block; or
if it is not possible to determine where the immediately subsequent block occurs, configuring the filter to avoid the filtering of the right most column of the current block and the bottom most row of the current block.
17. A filter as claimed in claims 15 or 16, wherein the filter is configured to determine where an immediately subsequent block occurs by:
monitoring a Z-order number of the current block, wherein the Z-order relates to a sequence in which blocks are decoded or coded; and
determining where an immediately subsequent block occurs based on the next number in the Z-order sequence, compared to the Z-order number of the current block.
18. A filter as claimed in claims 15 or 16, wherein the filter is configured to determine where an immediately subsequent block occurs by using Z-order numbers on at least first and second levels, by:
monitoring a first level Z-order number of the current block, wherein the first level Z-order relates to a sequence in which blocks are decoded or coded;
monitoring a second level Z-order number of the current block, wherein the second level Z-order relates to a sequence in which groups of blocks at a first level Z-order are decoded or coded; the filter being configured to:
determine where an immediately subsequent block occurs based on the next number in the first level Z-order number, compared to the first level Z-order number of the current block; and,
if the location of the immediately subsequent block cannot be determined based on the first level Z-order numbers, using the second level Z-order numbers to determine where the immediately subsequent block will occur.
19. A filter as claimed in claim 15, wherein the filter is configured to operate according to any one of claims 2 to 14.
20. A decoder comprising a modifying means configured to modify a pixel value by a weighted combination of the pixel value and at least one spatially neighboring pixel value, and wherein the modifying means is operative to:
filter on a block by block basis, each block comprising rows and columns of pixels;
determine where an immediately subsequent block to the current block will occur; and
avoid the filtering of certain pixels in the current block depending on where it is determined that the immediately subsequent block will occur.
21. A decoder as claimed in claim 20, wherein the modifying means is further operative to perform a method as claimed in any one of claims 2 to 14.
22. A computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method according to any one of claims 1 to 14.
23. A computer program product comprising a computer-readable medium with the computer program as claimed in claim 22.
PCT/EP2018/051329 2017-01-19 2018-01-19 Filter apparatus and methods WO2018134363A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762448058P 2017-01-19 2017-01-19
US62/448058 2017-01-19

Publications (1)

Publication Number Publication Date
WO2018134363A1 true WO2018134363A1 (en) 2018-07-26

Family

ID=61187271

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/EP2018/051328 WO2018134362A1 (en) 2017-01-19 2018-01-19 Filter apparatus and methods
PCT/EP2018/051329 WO2018134363A1 (en) 2017-01-19 2018-01-19 Filter apparatus and methods

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/051328 WO2018134362A1 (en) 2017-01-19 2018-01-19 Filter apparatus and methods

Country Status (1)

Country Link
WO (2) WO2018134362A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2019467372B2 (en) * 2019-09-24 2022-05-19 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image coding/decoding method, coder, decoder, and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140036992A1 (en) * 2012-08-01 2014-02-06 Mediatek Inc. Method and Apparatus for Video Processing Incorporating Deblocking and Sample Adaptive Offset
WO2015054811A1 (en) * 2013-10-14 2015-04-23 Microsoft Corporation Features of intra block copy prediction mode for video and image coding and decoding

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9596461B2 (en) * 2012-11-26 2017-03-14 Qualcomm Incorporated Loop filtering across constrained intra block boundaries in video coding
US9924175B2 (en) * 2014-06-11 2018-03-20 Qualcomm Incorporated Determining application of deblocking filtering to palette coded blocks in video coding

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020039363A1 (en) * 2018-08-21 2020-02-27 Beijing Bytedance Network Technology Co., Ltd. Unequal weighted sample averages for bilateral filter
WO2020039365A1 (en) * 2018-08-21 2020-02-27 Beijing Bytedance Network Technology Co., Ltd. Quantized difference used for weighting parameters derivation in bilateral filters
US11490081B2 (en) 2018-08-21 2022-11-01 Beijing Bytedance Network Technology Co., Ltd. Unequal weighted sample averages for bilateral filter
US11558610B2 (en) 2018-08-21 2023-01-17 Beijing Bytedance Network Technology Co., Ltd. Quantized difference used for weighting parameters derivation in bilateral filters
WO2025077859A1 (en) * 2023-10-12 2025-04-17 Mediatek Inc. Methods and apparatus of propagating models for extrapolation intra prediction model inheritance in video coding

Also Published As

Publication number Publication date
WO2018134362A1 (en) 2018-07-26

Similar Documents

Publication Publication Date Title
US11902515B2 (en) Method and apparatus for video coding
US11272175B2 (en) Deringing filter for video coding
US11122263B2 (en) Deringing filter for video coding
KR101752612B1 (en) Method of sample adaptive offset processing for video coding
KR102030304B1 (en) Apparatus for applying sample adaptive offsets
US20170272758A1 (en) Video encoding method and apparatus using independent partition coding and associated video decoding method and apparatus
US20140198844A1 (en) Method and apparatus for non-cross-tile loop filtering
WO2018149995A1 (en) Filter apparatus and methods
JP7295330B2 (en) Quantization processing for palette mode
EP2664139A2 (en) A method for deblocking filter control and a deblocking filtering control device
WO2018134128A1 (en) Filtering of video data using a shared look-up table
WO2018134363A1 (en) Filter apparatus and methods
US12425660B2 (en) Combining deblock filtering and another filtering for video encoding and/or decoding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18703912

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18703912

Country of ref document: EP

Kind code of ref document: A1