
WO1998030979A1 - Perceptual image distortion evaluation - Google Patents

Perceptual image distortion evaluation

Info

Publication number
WO1998030979A1
WO1998030979A1 (PCT/FR1997/002222)
Authority
WO
WIPO (PCT)
Prior art keywords
image
block
values
contrast
luminance
Application number
PCT/FR1997/002222
Other languages
French (fr)
Inventor
Jean Louis Blin
Original Assignee
Telediffusion De France
France Telecom
Application filed by Telediffusion De France, France Telecom filed Critical Telediffusion De France
Priority to EP97950240A priority Critical patent/EP0951698A1/en
Publication of WO1998030979A1 publication Critical patent/WO1998030979A1/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00: Image coding

Definitions

  • the present invention relates to a method for estimating the local perception of quantization distortions in a still image for digital image processing in general, and in particular for digital decoders of the DCT type.
  • the transition from an analog signal representing an image to the corresponding digital signal is carried out by sampling the continuous signal at a finite number of discrete instants and by quantizing the possible values of this signal at each of these instants. While sampling under certain conditions does not alter the information carried by a signal, quantization, that is to say the restriction of the analog signal to a finite number of values, does not make it possible to render, when the source image (reference image) is displayed on a screen, all the subtle nuances of the original analog information. A systematic approximation is introduced which can be minimized, but never completely eliminated.
  • the perception by the human eye of this quantization can be likened to a perception of local contrast distortion, the contrast being a relative difference in luminance (light intensity) ΔL between two points or two zones.
  • in the prior art, a simple measurement of the difference between the reference image signal (non-coded image) and the test image signal (coded image) is carried out.
  • this measurement does not allow an optimum estimate of the perceived distortion, because the specific processing performed by the visual system is not taken into account.
  • An example of a contrast perception model is described in the article by F. Kingdom and B. Moulden: "A model for contrast discrimination with incremental and decremental patches", Vision Research, volume 31, 1991.
  • the object of the invention is to measure the quality of an image at the level of the perception that the human eye has of it, by obtaining a map of the values of the local perception of contrast distortions for the blocks of the image, and thus to make it possible to optimize the quality of the image perceived by the eye by allowing the quantization step of the coding system best suited to visual perception to be chosen.
  • the present invention provides a method for estimating the local perception of quantization distortions, that is to say the local perception of contrast distortions, which takes these specific features of the processing performed by the visual system into account, and which thus allows a finer measurement of image quality and a better selection of the quantization step to choose.
  • the method for estimating the local perception of the quantization distortion of a system for coding an image, from the coded image and the corresponding non-coded image, is characterized in that it comprises the following steps: definition, for each block I of the coded image and of the non-coded image decomposed into a plurality of blocks I, of a first window II centered on block I, over which the local luminance Li,j is calculated, as well as of a second window III centered on the first block I and on the first window II, over which the adaptation luminance La is calculated.
  • This value can be relativized by dividing it by the sum, over the spatial frequency orientations, of the final contrast values obtained in the previous step for block I of the non-coded image, making it possible to obtain the value of the local perception of contrast distortions (PLDC) for each block I of the image.
  • the isotropic spatial filtering step consists of filtering carried out by a filter centered around 8 cycles per degree (cpd).
  • the step of the first non-linear processing consists of a logarithmic processing (variable gain), and that of the second processing is analogous to the response of a neuron.
  • this second non-linear processing consists of raising to the power m, with m < 1.
  • An example of windowing on which the method according to the present invention is to be applied consists in considering a first window II made up of at least three blocks by three blocks I of N by N points, and a second window III made up of at least five blocks by five blocks I of N by N points.
  • the method according to the invention takes the following phenomena into account: the non-linear response to contrast, the variable sensitivity as a function of frequency, the localized zone of the adaptation luminance,
  • the accumulation zone defining a maximum probability of perception of a stimulus (the light contrast that can be perceived), and the masking of weak signals by strong signals.
  • the method according to the present invention provides both a numerical evaluation of the contrast distortion linked to the quantization as perceived by the visual system, and the threshold below which no distortion is perceived.
  • the method is applied image block by image block, respecting the local perception of the light contrast distortions of each block, and thus makes it possible to obtain a map of the values of the perception of the local contrast distortions linked to the quantization of the image.
  • FIG. 1 represents the block diagram of the method for estimating the perception of local contrast distortion according to the present invention
  • FIG. 2 represents a block associated with the windows of the test and reference image from which the first step of the method will be executed according to a preferred embodiment of the present invention
  • FIG. 3 represents a block diagram of the retinal treatment of the second and third stages of the method according to the preferred embodiment of the present invention
  • FIG. 4 represents a block diagram of the first cortical treatments of the fourth and fifth steps of the method according to the preferred embodiment of the present invention
  • FIG. 5 represents the diagram for the separation of the horizontal and vertical frequencies from the diagonal frequencies of the method according to the preferred embodiment of the present invention.
  • FIG. 6 represents the specifications of a filter for the frequency transition on a horizontal axis according to the embodiment of the present invention.
  • the method according to the present invention is shown in FIG. 1. It consists in processing in parallel the test image 8, that is to say the coded image, and the reference image 9, that is to say the uncoded image, by the method of estimating the local perception of the quantization distortions in a still image according to the present invention.
  • FIG. 1 is limited to illustrating the processing of the reference image up to the sixth step of calculating the weighted difference of the current windows of the test image 8 and the reference image 9, a step in which the processed intermediate contrast values of the test image 8 are involved, these having undergone the same processing as the intermediate contrast values of the reference image 9.
  • Step 1 consists in processing the digital signals of the test image 8 and of the reference image 9 to transform them into a luminance value.
  • the test image 8 and the reference image 9 are divided into a plurality of image subsets.
  • An example of such an image subset is shown in FIG. 2. It consists of an image block I, for example square and made up of N by N points, on which are centered, on the one hand, a first window II of i by j points (made up of at least three blocks by three blocks in the case of FIG. 2), for the calculation of the luminance Li,j (or surrounding luminance), and, on the other hand, a second window III (of at least five blocks by five blocks in the case of FIG. 2), for the calculation of the adaptation luminance La (or average luminance), with a view to calculating the local perception of contrast distortions.
  • Steps 2 and 3 of the method according to the invention constitute retinal treatments of the luminances defined above.
  • This first non-linear treatment is preferably of the logarithmic type, and operates at the retinal level as a variable gain control for adaptation to ambient light.
  • a processing is carried out taking into account the sensitivity to spatial frequencies, that is to say the sensitivity to the frequencies projected onto the retina, expressed in cycles per degree (cpd), by filtering the intermediate contrast ΔLi,j*, preferably with an isotropic two-dimensional spatial filter.
  • the signal SQ = ΔL'i,j* output by the retina is thus obtained.
  • Steps 4 and 5 of the method according to the invention constitute first cortical processing of the signals S Q originating from the retina.
  • Step 4 consists in processing this signal SQ with respect to n spatial orientations of chosen frequencies.
  • the n signals ΔL''i,j*(1), ΔL''i,j*(2), ..., ΔL''i,j*(n) are thus obtained.
  • Step 5 consists in applying to each of these separated values a new non-linear processing characteristic of the compressive response of a neuron to a contrast, to obtain the corresponding values ΔL''i,j**(1), ΔL''i,j**(2), ..., ΔL''i,j**(n).
  • Steps 6 and 7 constitute second cortical treatments applied to the intermediate contrasts previously treated relative to the n spatial orientations of frequencies.
  • step 6 consists in computing, point to point, the differences of the intermediate contrasts thus processed, by type of orientation, between the current blocks of the test image 8 and of the reference image 9, which have undergone the five preceding steps. These differences are weighted according to the orientations before being added.
  • step 7 consists of an adaptation to the contrasts of the first current reference window II, by dividing the result of the sixth step by the equivalent contrast calculated by type of orientation in this window.
  • the method for estimating the local perception of contrast is applied to the 4:2:2 domain, a global coding standard whose specifications are described in Recommendation ITU-R BT.601-4, "Digital television coding parameters for studios", of the International Telecommunication Union.
  • the first step in the process consists in transforming the digital signals of an incoming 4:2:2 image into a luminance value.
  • the theoretical value of the luminance L (in cd/m²) expressed in the 4:2:2 domain over the N points of the image is given by L = 70 × ((N − 16) / 219)^γ, with γ = 2.2.
  • the test image 8 and the reference image 9 are divided into image subsets as shown in FIG. 2.
  • a block of image I of 16 points by 16 points is defined, around which are centered a first window II of 3 blocks by 3 blocks (48 by 48 points), in which the luminance Li,j is calculated, and a second window III of 5 blocks by 5 blocks (80 by 80 points), in which the adaptation luminance La is calculated.
  • the calculated luminance value is then applied to the current block.
  • the operation can thus be repeated from block to block over the entire 4:2:2 image.
  • the adaptation luminance La is the mean of the luminance over the second window III: La = (1 / (N × N)) Σi Σj Li,j, with N = 5 × 16 = 80 points.
  • the adaptation luminance La as well as the luminance Li,j undergo a first non-linear processing together with a local adaptation to the light context, as represented by the diagram of FIG. 3.
  • This first non-linear processing 10 is of logarithmic type and operates at the retinal level as a variable gain control for adaptation to ambient light.
  • with Li,j being the surrounding luminance before and Li,j* the surrounding luminance after the same non-linear processing 10, the luminances Li,j and La after processing 10 are Li,j* = ln[1 + Li,j/Lh] and La* = ln[1 + La/Lh], where Lh is the cut-off luminance (approximately 0.4 cd/m²).
  • the intermediate contrast ΔLi,j* is obtained by ΔLi,j* = Li,j* − La*.
  • this intermediate contrast is filtered at 12 by an oriented spatial filter corresponding to the sensitivity to the spatial frequencies of the eye, that is to say, corresponding to the frequencies projected onto the retina (expressed in cpd , cycles per degree).
  • This is shown in Figure 3, and corresponds to a retinal treatment of intermediate contrast.
  • experience has shown that a filter centered on 8 cpd can preferably be used. This isotropic visual luminance spatial filter is of the band-pass type and is formed by the difference of two Gaussians.
  • kc = 3.177: parameter which conditions the maximum gain of the filter,
  • α = 3.6: magnitude ratio between the central Gaussian and the peripheral Gaussian,
  • σc = 0.666 arc minutes: standard deviation of the central Gaussian, and
  • ech = 1: scale factor for sampling the filter coefficient values; its value is taken to be around one minute of arc since, for viewing normalized to 6 times the height of a television screen, the distance between 2 points on the screen is 1 minute of arc.
  • the horizontal and vertical frequencies can preferably be separated from the diagonal frequencies using a two-dimensional oriented filter, an example of whose template is shown in FIGS. 5 and 6.
  • the two quantities obtained, respectively ΔL''i,j*(HV) and ΔL''i,j*(DIAG), each undergo a second non-linear processing characteristic of the compressive response of a neuron to a contrast, as shown in FIG. 4.
  • This processing makes it possible to obtain, for each signal, the neural response to contrast ΔL''i,j** = [ΔL''i,j* + C0]^m − K0.
  • K0 = 0.518: factor allowing a zero response for a zero contrast ΔL.
  • the difference of the intermediate contrasts according to the diagonal orientation of the frequencies is weighted by a factor k, preferably equal to 0.5 due to a lower sensitivity of the eye to the diagonal frequencies.
  • PLDC Local Perception of Contrast Distortions
  • the present invention takes into account the fact that the human eye is sensitive to a relative error with respect to the local contrast and not to an absolute error as has been considered until now.
  • the human observer has a constant decision criterion for the threshold of perception of a distortion of contrast.
  • the perception of the quantization is equivalent to a contrast distortion, which is why the value of the PLDC must be constant at the threshold of perception of the quantization, regardless of the local contrast, the spatial frequency and its orientation.
  • the present invention thus makes it possible to obtain the values of the perception of the local distortion of contrast linked to the quantification of the image in each block thereof on the basis of objective visual perception criteria.
  • the present invention is particularly well suited to the choice of the quantization step of an image processed in 4:2:2 in the MPEG-2 domain, but can be applied to other types of image formats, such as for example JPEG, without departing from the scope of the invention, and is independent of the type of digital coding of television signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention concerns a method for evaluating perceptual image quantization distortion in a still image. This method takes into account the specific characteristics of the visual processing of an image to obtain the most accurate evaluation, based on the difference between the reference image (9) and the tested image (8), both divided into a plurality of image blocks (I). For each image block (I), the method consists in computing (1) the point luminance Li,j and the adaptation luminance La, in applying a non-linear processing (2; 10) and computing the intermediate contrast (2; 11) ΔLi,j* = Li,j* − La*, in filtering it (3; 12) according to the spatial frequency sensitivity, in separating (4; 13) the frequencies according to orientation, in applying a second non-linear processing (5; 14), and in computing the point-to-point difference, weighted by spatial frequency orientation, between the intermediate contrast values obtained, deducing therefrom, for each tested image block, the value of the local perceptual contrast distortion.

Description

ESTIMATION OF THE PERCEPTION OF IMAGE DISTORTIONS
The present invention relates to a method for estimating the local perception of quantization distortions in a still image, for digital image processing in general and in particular for digital decoders of the DCT type.
The transition from an analog signal representing an image to the corresponding digital signal is carried out by sampling the continuous signal at a finite number of discrete instants and by quantizing the possible values of this signal at each of these instants. While sampling under certain conditions does not alter the information carried by a signal, quantization, that is to say the restriction of the analog signal to a finite number of values, does not make it possible to render, when the source image (reference image) is displayed on a screen, all the subtle nuances of the original analog information. A systematic approximation is introduced which can be minimized, but never completely eliminated. However, the perception by the human eye of this quantization can be likened to a perception of local contrast distortion, the contrast being a relative difference in luminance (light intensity) ΔL between two points or two zones. In the prior art, a simple measurement of the difference between the reference image signal (non-coded image) and the test image signal (coded image) is carried out. Unfortunately, this measurement does not allow an optimum estimate of the perceived distortion, because the specific processing performed by the visual system is not taken into account. An example of a contrast perception model is described in the article by F. Kingdom and B. Moulden: "A model for contrast discrimination with incremental and decremental patches", Vision Research, volume 31, 1991.
The object of the invention is to measure the quality of an image at the level of the perception that the human eye has of it, by obtaining a map of the values of the local perception of contrast distortions for the blocks of the image, and thus to make it possible to optimize the quality of the image perceived by the eye by allowing the quantization step of the coding system best suited to visual perception to be chosen. The present invention provides a method for estimating the local perception of quantization distortions, that is to say the local perception of contrast distortions, which takes these specific features of the processing performed by the visual system into account, and which thus allows a finer measurement of image quality and a better selection of the quantization step to choose.
Indeed, according to the present invention, the method for estimating the local perception of the quantization distortion of a system for coding an image, from the coded image and the corresponding non-coded image, is characterized in that it comprises the following steps:
- definition, for each block I of the coded image and of the non-coded image decomposed into a plurality of blocks I, of a first window II centered on block I, over which the local luminance Li,j is calculated, as well as of a second window III centered on the first block I and on the first window II, over which the adaptation luminance La is calculated,
- non-linear processing of the local luminance Li,j and of the adaptation luminance La obtained, and calculation of the intermediate contrast ΔLi,j* = Li,j* − La* as the difference between the obtained values of the point luminance Li,j* and the adaptation luminance La*,
- oriented spatial filtering of the intermediate contrast ΔLi,j* according to the spatial frequencies,
- separation by orientation of the values relating to the spatial frequency orientations of the intermediate contrast obtained in the preceding step (SQ = ΔL'i,j*), making it possible to obtain n intermediate contrast values ΔL''i,j*(1), ΔL''i,j*(2), ..., ΔL''i,j*(n), each corresponding to one spatial frequency orientation,
- second non-linear processing of each of the n values obtained in the preceding step, making it possible to obtain n corresponding values ΔL''i,j**(1), ΔL''i,j**(2), ..., ΔL''i,j**(n),
- calculation of the difference, weighted by spatial frequency orientation, point to point, between the n final contrast values of block I of the non-coded image obtained in the preceding step and the n final contrast values of block I of the coded image obtained in the preceding step.
This value can be relativized by dividing it by the sum, over the spatial frequency orientations, of the final contrast values obtained in the preceding step for block I of the non-coded image, making it possible to obtain the value of the local perception of contrast distortions (PLDC) for each block I of the image.
According to a preferred embodiment of the present invention, the isotropic spatial filtering step consists of a two-dimensional filtering of the horizontal/vertical and diagonal frequencies (n = 2), and, during the step of calculating the difference weighted by spatial frequency orientation, the differences between the values for the diagonal frequencies are weighted by 0.5.
Preferably, the isotropic spatial filtering step consists of a filtering carried out by a filter centered around 8 cycles per degree (cpd).
The first non-linear processing step consists of a logarithmic processing (variable gain), and the second is analogous to the response of a neuron. Preferably, this second non-linear processing consists of raising to the power m, with m < 1. An example of windowing on which the method according to the present invention is to be applied consists in considering a first window II made up of at least three blocks by three blocks I of N by N points, and a second window III made up of at least five blocks by five blocks I of N by N points.
The method according to the invention takes the following phenomena into account:
- the non-linear response to contrast,
- the variable sensitivity as a function of frequency,
- the localized zone of the adaptation luminance,
- the accumulation zone defining a maximum probability of perception of a stimulus (light contrast that can be perceived), and
- the masking of weak signals by strong signals.
The method according to the present invention provides both a numerical evaluation of the contrast distortion linked to the quantization as perceived by the visual system, and the threshold below which no distortion is perceived.
The method is applied image block by image block, respecting the local perception of the light contrast distortions of each block, and thus makes it possible to obtain a map of the values of the perception of the local contrast distortions linked to the quantization of the image.
The present invention will be better understood on reading the following description, which details the method according to the invention as well as a preferred embodiment of the present invention, illustrated by the following figures:
- FIG. 1 shows the block diagram of the method for estimating the perception of local contrast distortion according to the present invention,
- FIG. 2 shows a block associated with the windows of the test and reference images from which the first step of the method is executed, according to a preferred embodiment of the present invention,
- FIG. 3 shows a block diagram of the retinal processing of the second and third steps of the method according to the preferred embodiment of the present invention,
- FIG. 4 shows a block diagram of the first cortical processing of the fourth and fifth steps of the method according to the preferred embodiment of the present invention,
- FIG. 5 shows the diagram for separating the horizontal and vertical frequencies from the diagonal frequencies in the method according to the preferred embodiment of the present invention, and
- FIG. 6 shows the specifications of a filter for the frequency transition on a horizontal axis according to the embodiment of the present invention.
The method according to the present invention is shown in FIG. 1. It consists in processing in parallel the test image 8, that is to say the coded image, and the reference image 9, that is to say the non-coded image, by the method for estimating the local perception of quantization distortions in a still image according to the present invention.
For greater clarity, FIG. 1 is limited to illustrating the processing of the reference image up to the sixth step of calculating the weighted difference of the current windows of the test image 8 and the reference image 9, a step in which the processed intermediate contrast values of the test image 8 are involved, these having undergone the same processing as the intermediate contrast values of the reference image 9.
Step 1 consists in processing the digital signals of the test image 8 and of the reference image 9 to transform them into a luminance value. For this, the test image 8 and the reference image 9 are divided into a plurality of image subsets. An example of such an image subset is shown in FIG. 2. It consists of an image block I, for example square and made up of N by N points, on which are centered, on the one hand, a first window II of i by j points (made up of at least three blocks by three blocks in the case of FIG. 2), for the calculation of the luminance Li,j (or surrounding luminance), and, on the other hand, a second window III (of at least five blocks by five blocks in the case of FIG. 2), for the calculation of the adaptation luminance La (or average luminance), with a view to calculating the local perception of contrast distortions. The calculated value of the luminance Li,j is then applied to the current block, and the operation is repeated from block to block over all of the test and reference images.
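The block-and-window bookkeeping of this first step can be sketched as follows in Python/NumPy; this is only an illustration of the windowing of FIG. 2, not the patent's implementation, and the function name, the clipping of windows at the image border and the use of a random luminance map are assumptions introduced here.

```python
import numpy as np

def iter_block_windows(lum, n=16, win2_blocks=3, win3_blocks=5):
    """For each n x n block I of a luminance map, yield the block, the first
    window II (win2_blocks x win2_blocks blocks centred on I) and the second
    window III (win3_blocks x win3_blocks blocks centred on I).  Windows are
    clipped at the image border, an illustrative choice the text leaves open."""
    h, w = lum.shape

    def window(by, bx, nblocks):
        half = (nblocks // 2) * n
        y0, y1 = max(0, by - half), min(h, by + n + half)
        x0, x1 = max(0, bx - half), min(w, bx + n + half)
        return lum[y0:y1, x0:x1]

    for by in range(0, h - n + 1, n):
        for bx in range(0, w - n + 1, n):
            block = lum[by:by + n, bx:bx + n]
            yield (by // n, bx // n), block, window(by, bx, win2_blocks), window(by, bx, win3_blocks)

# Example: the per-point luminance Li,j lives on window II, the adaptation
# luminance La is taken over window III (here simply as its mean value).
lum = np.random.uniform(0.0, 70.0, size=(96, 128))   # stand-in luminance map in cd/m^2
for idx, block, win2, win3 in iter_block_windows(lum):
    l_ij = win2            # surrounding (local) luminance values of window II
    l_a = win3.mean()      # adaptation (average) luminance over window III
```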
Steps 2 and 3 of the method according to the invention constitute retinal processing of the luminances defined above.
Indeed, step 2 consists in applying a non-linear processing to the luminance Li,j (giving Li,j*) as well as to the adaptation luminance La (giving La*), and then in deducing therefrom the contrast locally adapted to the average luminance, that is to say the intermediate contrast ΔLi,j* = Li,j* − La*. This first non-linear processing is preferably of the logarithmic type, and operates at the retinal level as a variable-gain control for adaptation to the ambient light.
During step 3, a processing is carried out which takes into account the sensitivity to spatial frequencies, that is to say the sensitivity to the frequencies projected onto the retina, expressed in cycles per degree (cpd), by filtering the intermediate contrast ΔLi,j*, preferably with an isotropic two-dimensional spatial filter. The signal SQ = ΔL'i,j* output by the retina is thus obtained.
Steps 4 and 5 of the method according to the invention constitute the first cortical processing of the signals SQ coming from the retina. Step 4 consists in processing this signal SQ with respect to n chosen spatial frequency orientations. The n signals ΔL''i,j*(1), ΔL''i,j*(2), ..., ΔL''i,j*(n) are thus obtained. For example, for a complete cortical processing, the orientations can be spaced 30° apart (n = 12). Step 5 consists in applying to each of these separated values a new non-linear processing characteristic of the compressive response of a neuron to a contrast, to obtain the corresponding values ΔL''i,j**(1), ΔL''i,j**(2), ..., ΔL''i,j**(n).
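Since the template of the oriented filter of FIGS. 5 and 6 is not reproduced in this text, the following NumPy sketch only illustrates the idea of step 4 for the two-orientation case of the preferred embodiment: the spectrum of the retinal output is split into a horizontal/vertical channel and a diagonal channel by a hard angular mask, which is an assumption and not the patent's own filter.

```python
import numpy as np

def split_hv_diag(s_q):
    """Split the retinal output S_Q into a horizontal/vertical channel and a
    diagonal channel by masking angular sectors of its 2-D spectrum: spectral
    components within 22.5 degrees of either frequency axis go to the HV
    channel, the remainder to the diagonal channel (an illustrative stand-in
    for the oriented filter of FIGS. 5 and 6)."""
    h, w = s_q.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    angle = np.mod(np.arctan2(fy, fx), np.pi / 2)          # fold orientation into [0, pi/2)
    dist_to_axis = np.minimum(angle, np.pi / 2 - angle)     # angular distance to nearest axis
    hv_mask = dist_to_axis <= np.pi / 8
    spectrum = np.fft.fft2(s_q)
    hv = np.real(np.fft.ifft2(spectrum * hv_mask))
    diag = np.real(np.fft.ifft2(spectrum * ~hv_mask))
    return hv, diag
```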
Steps 6 and 7 constitute a second cortical processing applied to the intermediate contrasts previously processed for the n spatial frequency orientations.
Indeed, step 6 consists in computing, point to point, the differences of the intermediate contrasts thus processed, by type of orientation, between the current blocks of the test image 8 and of the reference image 9, which have undergone the five preceding steps. These differences are weighted according to the orientations before being added.
Finally, step 7 consists of an adaptation to the contrasts of the first current reference window II, by dividing the result of the sixth step by the equivalent contrast calculated by type of orientation in this window.
The Local Perception of Contrast Distortions (denoted PLDC), which expresses a sensitivity to a relative error, is thus obtained.
By repeating this processing on all the blocks constituting the image, it is possible to obtain a precise map of the values of the local perception of contrast distortions (PLDC) in a still image.
Thus, all the specific features of the processing performed by the visual system mentioned above have been taken into account, and a visually perceived value of the contrast distortions linked to the quantization of the signals in a still image is obtained.
In the preferred embodiment described below, the method for estimating the local perception of contrast is applied to the 4:2:2 domain, a global coding standard whose specifications are described in Recommendation ITU-R BT.601-4, "Digital television coding parameters for studios", of the International Telecommunication Union. The first step of the method consists, first of all, in transforming the digital signals of an incoming 4:2:2 image into a luminance value. The theoretical value of the luminance L (in cd/m²) expressed in the 4:2:2 domain over the N points of the image is given by:
L = 70 × ((N − 16) / 219)^γ    (1)
with γ = 2.2, the nominal value adopted for the 4:2:2 standard.
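Equation (1) transcribes directly into code; in the sketch below the clipping of code values to the 16 to 235 range is an added safeguard, not something equation (1) itself states.

```python
import numpy as np

def luma_code_to_luminance(code, gamma=2.2, peak=70.0):
    """Equation (1): L = 70 * ((N - 16) / 219)**gamma, in cd/m^2, for 8-bit
    4:2:2 luma code values N (black at 16, nominal white at 235).  Clipping
    to [16, 235] is an illustrative safeguard, not part of equation (1)."""
    n = np.clip(np.asarray(code, dtype=np.float64), 16.0, 235.0)
    return peak * ((n - 16.0) / 219.0) ** gamma

print(luma_code_to_luminance([16, 126, 235]))   # 0 cd/m^2, mid grey, 70 cd/m^2
```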
The test image 8 and the reference image 9 are divided into image subsets as shown in FIG. 2. An image block I of 16 points by 16 points is defined, around which are centered a first window II of 3 blocks by 3 blocks (48 by 48 points), in which the luminance Li,j is calculated, and then a second window III of 5 blocks by 5 blocks (80 by 80 points), in which the adaptation luminance La is calculated.
The calculated luminance value is then applied to the current block. The operation can thus be repeated from block to block over the entire 4:2:2 image. The adaptation luminance La is given by:
La = (1 / (N × N)) Σ(i=1..N) Σ(j=1..N) Li,j    (2)
with N = 5 × 16 = 80 points. During the second step of the method according to the preferred embodiment of the present invention, the adaptation luminance La as well as the luminance Li,j undergo a first non-linear processing together with a local adaptation to the light context, as represented by the diagram of FIG. 3. This first non-linear processing 10 is of the logarithmic type and operates at the retinal level as a variable-gain control for adaptation to the ambient light. With Li,j being the surrounding luminance before and Li,j* the surrounding luminance after the same non-linear processing 10, the luminances Li,j and La after the processing 10 are:
Li,j* = ln[1 + Li,j / Lh]    (3)
La* = ln[1 + La / Lh]    (4)
Lh being the cut-off luminance corresponding to the limitation of the gain of a ganglion cell (dR/dL = K / [1 + L/Lh], with R the response of a ganglion cell) due to the physiological limit linked to daytime vision; this cut-off luminance is approximately 0.4 cd/m². The intermediate contrast ΔLi,j* is obtained by:
ΔLi,j* = Li,j* − La*    (5)
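Equations (2) to (5) can be transcribed almost literally, as in the sketch below; the cut-off luminance of 0.4 cd/m² comes from the text, while reading Li,j as the per-point luminance of window II and La as the mean over window III is an interpretation of FIG. 2 rather than something the equations state explicitly.

```python
import numpy as np

L_H = 0.4   # cut-off luminance in cd/m^2, as stated in the text

def retinal_contrast(win2_lum, win3_lum, l_h=L_H):
    """Equations (2)-(5): adaptation luminance La as the mean over window III,
    logarithmic non-linearity applied to the point luminances of window II and
    to La, then the intermediate contrast dL* = L* - La*."""
    l_a = win3_lum.mean()                  # (2) adaptation luminance (80 x 80 points)
    l_star = np.log1p(win2_lum / l_h)      # (3) Li,j* = ln(1 + Li,j / Lh)
    l_a_star = np.log1p(l_a / l_h)         # (4) La*   = ln(1 + La / Lh)
    return l_star - l_a_star               # (5) intermediate contrast dLi,j*

# Example with stand-in windows of 48 x 48 and 80 x 80 luminance points:
win2 = np.random.uniform(0.0, 70.0, (48, 48))
win3 = np.random.uniform(0.0, 70.0, (80, 80))
delta_l = retinal_contrast(win2, win3)
```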
Then, in the third step of the method, this intermediate contrast is filtered at 12 by a spatial filter corresponding to the sensitivity of the eye to spatial frequencies, that is to say to the frequencies projected onto the retina (expressed in cpd, cycles per degree). This is shown in FIG. 3, and corresponds to a retinal processing of the intermediate contrast. Experience has shown that a filter centered on 8 cpd can preferably be used. This isotropic visual luminance spatial filter is of the band-pass type and is formed by the difference of two Gaussians such that:
[Equation (6), the difference-of-Gaussians filter, appears in the original as an image and is not reproduced here.]
with:
kc = 3.177: parameter which conditions the maximum gain of the filter,
α = 3.6: magnitude ratio between the central Gaussian and the peripheral Gaussian,
σc = 0.666 arc minutes: standard deviation of the central Gaussian, and
ech = 1: scale factor for sampling the filter coefficient values; its value is taken to be around 1 minute of arc since, for viewing normalized to 6 times the height of a television screen, the distance between 2 points on the screen is 1 minute of arc.
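Since the exact expression of equation (6) is given only as an image, the sketch below assumes a standard difference-of-Gaussians form built from the four parameters the text does give (kc, α, σc and the 1-arc-minute sampling pitch); balancing the two Gaussians so that the filter has no DC response is likewise an assumption.

```python
import numpy as np

def dog_filter(radius=10, k_c=3.177, alpha=3.6, sigma_c=0.666, ech=1.0):
    """Band-pass difference-of-Gaussians kernel sampled every `ech` minutes of
    arc: a central Gaussian of standard deviation sigma_c (arc minutes) minus a
    peripheral Gaussian alpha times wider, scaled by k_c.  This functional form
    is an assumption consistent with the listed parameters, not the exact
    equation (6) of the patent."""
    ax = np.arange(-radius, radius + 1) * ech               # sample positions in arc minutes
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    centre = np.exp(-r2 / (2.0 * sigma_c ** 2))
    surround = np.exp(-r2 / (2.0 * (alpha * sigma_c) ** 2)) / alpha ** 2
    return k_c * (centre - surround)

# The intermediate contrast map can then be filtered by 2-D convolution, for
# example with scipy.ndimage.convolve(delta_l, dog_filter(), mode="nearest").
```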
As shown in FIG. 4, in a fourth step, the signal SQ = ΔL'i,j* coming from the retina is filtered at 13 so as to separate the horizontal and vertical frequencies from the diagonal frequencies. As shown schematically in FIG. 5, because of the cortical processing, the frequencies can preferably be separated into horizontal/vertical and diagonal frequencies using a two-dimensional oriented filter, an example of whose template is shown in FIGS. 5 and 6. In a fifth step of the method according to the invention, the two quantities obtained, respectively ΔL''i,j*(HV) and ΔL''i,j*(DIAG), each undergo a second non-linear processing characteristic of the compressive response of a neuron to a contrast, as shown in FIG. 4. This processing makes it possible to obtain, for each signal, the following point neural response to contrast:
ΔL''i,j** = [ΔL''i,j* + C0]^m − K0    (7)
with:
m = 0.243: exponent materializing the compressive response to a contrast, simulating the response of a neuron (m < 1),
C0 = 0.067: sensitivity at the detection threshold (ΔL = 0) in the embodiment described, and
K0 = 0.518: factor allowing a zero response for a zero contrast ΔL.
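Equation (7) is a one-line transformation; in the sketch below it is applied element-wise to an orientation channel, and taking the absolute value of the contrast before raising it to the power m is an added assumption (it keeps the fractional power real for negative contrasts). Note that (0 + C0)^m is approximately 0.518, so K0 does make the response vanish at zero contrast, as the text requires.

```python
import numpy as np

M_EXP, C_0, K_0 = 0.243, 0.067, 0.518

def neural_response(delta_l_oriented):
    """Equation (7): r = (|dL''*| + C0)**m - K0, applied element-wise to one
    orientation channel.  The absolute value is an added assumption; note that
    (0 + C_0)**M_EXP is roughly K_0, so a zero contrast gives a zero response."""
    return (np.abs(delta_l_oriented) + C_0) ** M_EXP - K_0
```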
Then, during the sixth step of the method, the point-to-point sum (i and j between 1 and N) of the differences between the intermediate contrasts thus obtained, according to the horizontal/vertical and diagonal orientations, is computed between the current block I of the reference image and the current block I of the test image. In this sum, the difference of the intermediate contrasts along the diagonal frequency orientation is weighted by a factor k, preferably equal to 0.5, because of the lower sensitivity of the eye to the diagonal frequencies. To obtain the relative Local Perception of Contrast Distortions (PLDC), this sum is divided by the sum, over the horizontal/vertical and diagonal orientations, of the intermediate contrasts thus obtained for block I of the reference image, according to the following equation:
[Equation (8), the expression of the PLDC, and its form with k = 0.5 and N = 48 points substituted, appear in the original as images and are not reproduced here.]
The masking of weak signals by strong signals is implicitly included in the calculation of this fraction.
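Equation (8) itself is not reproduced in this text, so the sketch below only follows the prose description of steps 6 and 7: a point-to-point sum of the differences of the neural responses between the test and reference blocks, with the diagonal channel weighted by k = 0.5, divided by the same weighted sum taken over the reference block alone. The use of absolute values (rather than, say, squared differences) and the small epsilon guarding against an all-zero reference block are assumptions introduced here.

```python
import numpy as np

def pldc(test_hv, test_diag, ref_hv, ref_diag, k=0.5, eps=1e-12):
    """Sketch of the relative PLDC of equation (8), following the prose only:
    weighted point-to-point differences between test and reference neural
    responses, normalised by the weighted responses of the reference block."""
    num = np.sum(np.abs(test_hv - ref_hv)) + k * np.sum(np.abs(test_diag - ref_diag))
    den = np.sum(np.abs(ref_hv)) + k * np.sum(np.abs(ref_diag)) + eps
    return num / den
```

Repeating this computation on every block of the image yields the PLDC map described above.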
The analysis process for highlighting differences between images is carried out either by directly comparing the images when this is possible, or by prior memorization following a learning phase. Our calculation of the perception of quantization distortions is therefore based on the perception of a difference between the non-coded reference image and the coded test image.
The present invention takes into account the fact that the human eye is sensitive to a relative error with respect to the local contrast, and not to an absolute error as has been considered until now. The human observer has a constant decision criterion for the threshold of perception of a contrast distortion. Now, it is considered here that the perception of the quantization is equivalent to a contrast distortion, which is why the value of the PLDC must be constant at the threshold of perception of the quantization, regardless of the local contrast, the spatial frequency and its orientation. Below this value, there is no perception of a contrast distortion (perception of the quantization). The estimation of the local contrast distortion linked to the quantization which is proposed here is based on the perception of an error relative to the local energy of the reference signal, and takes into account the different sensitivities according to the orientations. In the case of a processing according to the horizontal and diagonal orientations, the differences of the intermediate contrasts for the diagonal frequencies can advantageously be weighted by k = 0.5.
The present invention thus makes it possible to obtain the values of the perception of the local contrast distortion linked to the quantization of the image, in each block of the image, on the basis of objective visual perception criteria.
The present invention is particularly well suited to the choice of the quantization step of an image processed in 4:2:2 in the MPEG-2 domain, but can be applied to other image formats, such as JPEG, without departing from the scope of the invention, and is independent of the type of digital coding of television signals.
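For readers who want to see the processing chain end to end, the sketch below strings together the steps recited in the claims that follow: windowed local and adaptation luminances, a logarithmic non-linearity, isotropic then orientation-selective filtering, a power-m non-linearity, then the weighted difference and normalization (the per-block fraction is the one sketched after the equation above). It is a schematic Python outline under stated assumptions, not the patented implementation: the filter kernels, the value of the exponent m, the treatment of the local luminance as the block's point luminances, and all function and variable names are placeholders chosen for illustration.

```python
import numpy as np
from scipy.ndimage import convolve

# Placeholder filters.  The patent specifies an isotropic band-pass filter
# centred around 8 cycles per degree followed by a separation into
# horizontal/vertical and diagonal frequency orientations; these 3 x 3
# kernels only stand in for that decomposition.
ISO = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)
HV = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float) / 4.0
DIAG = np.array([[1, 0, 1], [0, -4, 0], [1, 0, 1]], float) / 4.0

def block_window_mean(luma, i0, j0, n, blocks):
    """Mean luminance over a window of `blocks` x `blocks` blocks of
    n x n points centred on the block whose top-left corner is (i0, j0)."""
    half = (blocks * n) // 2
    ci, cj = i0 + n // 2, j0 + n // 2
    return luma[max(ci - half, 0):ci + half,
                max(cj - half, 0):cj + half].mean()

def oriented_contrasts(luma, i0, j0, n, m=0.7):
    """Final oriented contrast maps for one block (schematic)."""
    # Local luminance: simplified here to the point luminances of the block
    # (the patent derives it from a 3 x 3-block window centred on the block).
    l_local = luma[i0:i0 + n, j0:j0 + n].astype(float)
    # Adaptation luminance: mean over the 5 x 5-block window (window III).
    l_adapt = block_window_mean(luma.astype(float), i0, j0, n, 5)

    # First non-linear processing (logarithmic), then intermediate contrast.
    dl = np.log1p(l_local) - np.log1p(l_adapt)

    # Isotropic spatial filtering, then separation by frequency orientation.
    dl_iso = convolve(dl, ISO, mode="nearest")
    dl_hv = convolve(dl_iso, HV, mode="nearest")
    dl_d = convolve(dl_iso, DIAG, mode="nearest")

    # Second non-linear processing: magnitudes raised to the power m < 1.
    compress = lambda x: np.sign(x) * np.abs(x) ** m
    return compress(dl_hv), compress(dl_d)

def pldc_map(ref_luma, test_luma, n=48, k=0.5, eps=1e-12):
    """Per-block map of the Local Perception of Contrast Distortions."""
    h, w = ref_luma.shape
    out = np.zeros((h // n, w // n))
    for bi in range(h // n):
        for bj in range(w // n):
            i0, j0 = bi * n, bj * n
            r_hv, r_d = oriented_contrasts(ref_luma, i0, j0, n)
            t_hv, t_d = oriented_contrasts(test_luma, i0, j0, n)
            num = (np.abs(r_hv - t_hv).sum()
                   + k * np.abs(r_d - t_d).sum())
            den = np.abs(r_hv).sum() + k * np.abs(r_d).sum() + eps
            out[bi, bj] = num / den
    return out
```

One possible use of such a per-block map, consistent with the aim stated above, is to drive the choice of the quantization step: the step is increased while the block's PLDC stays below the constant perception threshold and reduced otherwise. The patent itself does not fix a particular control loop, so this is only an illustration of how the map could be exploited.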

Claims

1. A method for estimating the local perception of the quantization distortion of an image coding system, from the coded image (8) and the corresponding uncoded image (9), characterized in that it comprises the following steps:
. definition (1), for each block (I) of the coded image and of the uncoded image, each decomposed into a plurality of blocks (I), of a first window (II) centered on the block (I), over which the local luminance (Lij) is calculated, and of a second window (III) centered on the block (I) and on the first window (II), over which the adaptation luminance (La) is calculated,
. non-linear processing (2; 10) of the local luminance (Lij) and of the adaptation luminance (La) thus obtained, and calculation of the intermediate contrast (ΔLij* = Lij* − La*) (2; 11) as the difference between the values obtained for the local luminance (Lij*) and the adaptation luminance (La*),
. oriented spatial filtering of the intermediate contrast (ΔLij*) according to the spatial frequencies,
. separation by orientations of the values relating to the spatial frequency orientations of the intermediate contrast obtained in the preceding step (ΔL'ij*), yielding n values (ΔL''ij*(1), ΔL''ij*(2), ..., ΔL''ij*(n)) of intermediate contrast, each corresponding to one spatial frequency orientation,
. second non-linear processing of each of the n values obtained in the preceding step, yielding n corresponding values (ΔL''ij**(1), ΔL''ij**(2), ..., ΔL''ij**(n)),
. calculation of the weighted difference (6), by spatial frequency orientation and point by point, between the final contrast values of the block (I) of the uncoded image (9) obtained in the preceding step and the final contrast values of the block (I) of the coded image obtained in the preceding step, and
. division (7) of the value thus obtained by the sum, over the spatial frequency orientations, of the final contrast values obtained in the preceding step for the block (I) of the uncoded image, yielding the value of the local perception of contrast distortions (PLDC) for each block (I) of the image.
2. Method according to claim 1, characterized in that the oriented spatial filtering step (4; 13) consists of a two-dimensional filtering of the horizontal/vertical and diagonal frequencies (n = 2).
3. Method according to claim 2, characterized in that, during the step of calculating the weighted difference (6) by spatial frequency orientations, the differences between the values for the diagonal frequencies are weighted by 0.5.
4. Method according to any one of the preceding claims, characterized in that the isotropic spatial filtering step (3; 12) is carried out by a filter centered around 8 cycles per degree (cpd).
5. Method according to any one of the preceding claims, characterized in that the first non-linear processing step (2; 10) consists of a logarithmic processing.
6. Method according to any one of the preceding claims, characterized in that the second non-linear processing step (5; 14) consists of raising to the power m (m < 1).
7. Method according to any one of the preceding claims, characterized in that the first window (II) consists of at least three blocks by three blocks (I) of N by N points.
8. Method according to any one of the preceding claims, characterized in that the second window (III) consists of at least five blocks by five blocks (I) of N by N points.
PCT/FR1997/002222 1997-01-08 1997-12-05 Perceptual image distortion evaluation WO1998030979A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP97950240A EP0951698A1 (en) 1997-01-08 1997-12-05 Perceptual image distortion evaluation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR9700096A FR2758198B1 (en) 1997-01-08 1997-01-08 METHOD OF ESTIMATING THE LOCAL PERCEPTION OF QUANTIFICATION DISTORTIONS IN A STILL IMAGE
FR97/00096 1997-01-08

Publications (1)

Publication Number Publication Date
WO1998030979A1 true WO1998030979A1 (en) 1998-07-16

Family

ID=9502425

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FR1997/002222 WO1998030979A1 (en) 1997-01-08 1997-12-05 Perceptual image distortion evaluation

Country Status (3)

Country Link
EP (1) EP0951698A1 (en)
FR (1) FR2758198B1 (en)
WO (1) WO1998030979A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1209624A1 (en) 2000-11-27 2002-05-29 Sony International (Europe) GmbH Method for compressed imaging artefact reduction

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0541302A2 (en) * 1991-11-08 1993-05-12 AT&T Corp. Improved video signal quantization for an MPEG like coding environment

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
HEEGER D J ET AL: "A model of perceptual image fidelity", PROCEEDINGS. INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (CAT. NO.95CB35819), PROCEEDINGS INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, WASHINGTON, DC, USA, 23-26 OCT. 1995, ISBN 0-7803-3122-2, 1995, LOS ALAMITOS, CA, USA, IEEE COMPUT. SOC. PRESS, USA, pages 343 - 345 vol.2, XP000673812 *
KINGDOM F ET AL: "A model for contrast discrimination with incremental and decremental test patches", VISION RESEARCH, 1991, UK, vol. 31, no. 5, ISSN 0042-6989, pages 851 - 858, XP000674889 *
TEO P C ET AL: "PERCEPTUAL IMAGE DISTORTION", PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (IC, AUSTIN, NOV. 13 - 16, 1994, vol. 2 OF 3, 13 November 1994 (1994-11-13), INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS, pages 982 - 986, XP000522762 *
WESTEN S J P ET AL: "Perceptual image quality based on a multiple channel HVS model", 1995 INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING. CONFERENCE PROCEEDINGS (CAT. NO.95CH35732), 1995 INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, DETROIT, MI, USA, 9-12 MAY 1995, ISBN 0-7803-2431-5, 1995, NEW YORK, NY, USA, IEEE, USA, pages 2351 - 2354 vol.4, XP000674113 *
WESTEN S J P ET AL: "PERCEPTUAL OPTIMIZATION OF IMAGE CODING ALGORITHMS", PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), WASHINGTON, OCT. 23 - 26, 1995, vol. 2, 23 October 1995 (1995-10-23), INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS, pages 69 - 72, XP000623914 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000049570A1 (en) * 1999-02-19 2000-08-24 Unisearch Limited Method for visual optimisation of embedded block codes to exploit visual masking phenomena
US6760482B1 (en) 1999-02-19 2004-07-06 Unisearch Limited Method for visual optimisation of embedded block codes to exploit visual masking phenomena
US8760578B2 (en) 2010-04-19 2014-06-24 Dolby Laboratories Licensing Corporation Quality assessment of high dynamic range, visual dynamic range and wide color gamut image and video
US8743291B2 (en) 2011-04-12 2014-06-03 Dolby Laboratories Licensing Corporation Quality assessment for images that have extended dynamic ranges or wide color gamuts

Also Published As

Publication number Publication date
FR2758198B1 (en) 2001-09-28
FR2758198A1 (en) 1998-07-10
EP0951698A1 (en) 1999-10-27


Legal Events

Code Title / Description
AK Designated states. Kind code of ref document: A1. Designated state(s): JP US.
AL Designated countries for regional patents. Kind code of ref document: A1. Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE.
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101).
121 Ep: the EPO has been informed by WIPO that EP was designated in this application.
WWE WIPO information: entry into national phase. Ref document number: 1997950240. Country of ref document: EP.
WWP WIPO information: published in national office. Ref document number: 1997950240. Country of ref document: EP.
NENP Non-entry into the national phase. Ref country code: JP. Ref document number: 1998530584. Format of ref document f/p: F.
WWW WIPO information: withdrawn in national office. Ref document number: 1997950240. Country of ref document: EP.