HK1037443A - Image processing circuit and method for modifying a pixel value - Google Patents
- Publication number
- HK1037443A HK01108142.9A
- Authority
- HK
- Hong Kong
- Prior art keywords
- pixel value
- random number
- pixel
- value
- threshold
- Prior art date
Description
Technical Field
The present invention relates generally to electronic and computer circuits, and more particularly to image processing circuits and methods for modifying pixel values. For example, such circuits and methods may be used to reduce or eliminate contouring artifacts in decompressed, "lossy"-compressed electronic images.
Background
Undesirable visible artifacts, such as contouring artifacts, sometimes appear in decompressed electronic images. Quantization during image compression often discards image information; hence the term "lossy compression". This loss can produce pixel-value errors when the image is decompressed, and these errors can appear as visible artifacts in the decompressed image. A contouring artifact is a line pattern that resembles the contour lines of a topographic map and is typically more visible in the low-brightness (darker) areas of the decompressed image than in the high-brightness (lighter) areas.
Unfortunately, some image processing techniques, such as gamma correction, make such contouring artifacts more visible. Generally, gamma correction increases the contrast of lighter areas of an image relative to darker areas; that is, luminance variations in the lighter areas are expanded relative to luminance variations in the darker areas. For example, a brightness change of 1 lumen in a lighter region might be gamma corrected to 1.5 lumens, while a brightness change of 1 lumen in a darker region might be left unchanged or gamma corrected to less than 1.5 lumens. The actual gamma correction algorithm used depends on variables such as the characteristics of the image display device. An image processing circuit uses gamma correction to compensate for the nonlinear luminance response of the human eye, which is more sensitive to brightness changes in darker image areas than in lighter image areas; that is, the eye perceives a 1 lumen change in a darker region more strongly than a 1 lumen change in a lighter region. Gamma correction therefore allows the human eye to perceive the entire luminance range of an image with linear or nearly linear contrast. Unfortunately, gamma correction often effectively increases the quantization step size in darker areas relative to lighter areas, and thus can increase the visibility of existing contouring artifacts.
Overview of image compression/decompression techniques
To assist the reader in understanding the concepts discussed above and in the following description of the invention, a basic overview of image compression/decompression techniques is provided below.
In order to transmit a relatively high-resolution image over a relatively low-bandwidth channel, or to store such an image in a relatively small memory space, it is often necessary to compress the digital data that represent the image. For example, High Definition Television (HDTV) video images are compressed so that they can be transmitted over existing television channels; without compression, HDTV video images would require a transmission channel much wider than the bandwidth of existing television channels. Likewise, images are usually compressed before being transmitted over the internet to reduce the amount of data and the transmission time to acceptable levels. Similarly, to increase the effective image storage capacity of a CD-ROM or computer server, images are often compressed before storage.
Typically, such image compression involves reducing the number of data bits needed to represent the image. Unfortunately, many compression techniques are lossy; that is, video information contained in the original image may be lost during compression. As mentioned above, this loss of information can cause noticeable differences, often called visual artifacts, in the decompressed image. These artifacts are usually undesirable because they degrade the visual quality of the decompressed image relative to that of the original image.
The basic principles of the currently popular block-based Motion Picture Experts Group (MPEG) compression standards, which include MPEG-1 and MPEG-2, are discussed with reference to figs. 1-3. For purposes of illustration, the discussion is based on compressing an image represented in the Y, CB, CR color space using the MPEG 4:2:0 format. However, the concepts discussed also apply to other MPEG formats, to images represented in other color spaces, and to other block-based compression standards, such as the Joint Photographic Experts Group (JPEG) standard, which is commonly used to compress still images. Furthermore, although many details of the MPEG standard and the Y, CB, CR color space are omitted here for simplicity, they are well known and are disclosed in many available references.
Referring to figs. 1-3, the MPEG standards are often used to compress temporal sequences of images, i.e., video frames, such as those found in television broadcasts. Each video frame is divided into regions called macroblocks, each of which includes one or more pixels. Fig. 1A shows a 16 x 16-pixel macroblock 10 having 256 pixels 12. In the MPEG standards, a macroblock is always 16 x 16 pixels, although other compression standards may use macroblocks of other sizes. In the original video frame, each pixel 12 has a respective luminance value Y and a respective pair of color-difference values CB and CR.
Referring to figs. 1A-1D, before a frame is compressed, the digital luminance (Y) and color-difference (CB and CR) values used for compression, i.e., the pre-compression values, are derived from the original Y, CB, and CR values of the original frame. In the MPEG 4:2:0 format, the pre-compression Y values are the same as the original Y values; therefore, each pixel 12 keeps its original luminance value Y. However, to reduce the amount of data to be compressed, the MPEG 4:2:0 format retains only one pre-compression CB value and one pre-compression CR value for each group 14 of four pixels 12. Each of these pre-compression CB and CR values is derived from the original CB and CR values of the four pixels 12 in the respective group 14. For example, the pre-compression CB value may be set equal to the average of the original CB values of the four pixels 12 in the group 14. Referring to figs. 1B-1D, the pre-compression Y, CB, and CR values generated for the macroblock 10 are arranged as a 16 x 16 matrix 17 of pre-compression Y values (equal to the original Y value of each pixel 12), an 8 x 8 matrix 18 of pre-compression CB values (one CB value for each group 14 of four pixels 12), and an 8 x 8 matrix 20 of pre-compression CR values (one CR value for each group 14 of four pixels 12). In the industry, the matrices 18 and 20 and the 8 x 8 quadrants of the matrix 17 are often referred to as "blocks" of values. Moreover, because it is more convenient to perform the compression transform on 8 x 8 blocks of pixel values than on a 16 x 16 block, the matrix 17 of pre-compression Y values is further divided into four 8 x 8 blocks 22a-22d, which correspond respectively to the 8 x 8 blocks A-D of pixels 12 in the macroblock 10. Thus, still referring to figs. 1B-1D, six 8 x 8 blocks of pre-compression pixel values are generated for each macroblock 10: four 8 x 8 blocks 22a-22d of pre-compression Y values, one 8 x 8 block 18 of pre-compression CB values, and one 8 x 8 block 20 of pre-compression CR values.
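For illustration, the following C sketch derives the 4:2:0 chroma blocks described above from a single 16 x 16 macroblock, keeping the Y values untouched and averaging each 2 x 2 group's CB and CR values. The array shapes and the rounded average are assumptions for this sketch, since the disclosure gives averaging only as one example.

```c
#include <stdint.h>

/* Derive the 4:2:0 pre-compression chroma blocks for one 16 x 16 macroblock:
 * each 2 x 2 group of pixels contributes one averaged CB value and one
 * averaged CR value to the respective 8 x 8 chroma block. */
void subsample_420(const uint8_t cb[16][16], const uint8_t cr[16][16],
                   uint8_t cb_block[8][8], uint8_t cr_block[8][8])
{
    for (int r = 0; r < 8; r++) {
        for (int c = 0; c < 8; c++) {
            int y0 = 2 * r, x0 = 2 * c;   /* top-left pixel of the 2 x 2 group */
            cb_block[r][c] = (uint8_t)((cb[y0][x0] + cb[y0][x0 + 1] +
                                        cb[y0 + 1][x0] + cb[y0 + 1][x0 + 1] + 2) / 4);
            cr_block[r][c] = (uint8_t)((cr[y0][x0] + cr[y0][x0 + 1] +
                                        cr[y0 + 1][x0] + cr[y0 + 1][x0 + 1] + 2) / 4);
        }
    }
}
```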
Fig. 2 is a general block diagram of MPEG compressor 30, MPEG compressor 30 being more generally referred to as encoder 30. In general, encoder 30 converts pre-compressed data for a frame or sequence of frames into encoded data that represents the same frame or sequence of frames, but contains a significantly reduced number of bits than the pre-compressed data. To accomplish this conversion, the encoder 30 subtracts or eliminates the excess data from the pre-compressed data and reformats the remaining data using efficient transformation and encoding techniques.
More specifically, the encoder 30 includes a frame reorder buffer 32 that receives the pre-compression data for one or more sequences of frames and rearranges the frames into an appropriate order for encoding. Thus, the reordered sequence often differs from the order in which the frames are generated or will be displayed. The encoder 30 assigns each stored frame to a respective group, called a Group Of Pictures (GOP), and labels each frame as either an intra (I) frame or a non-intra (non-I) frame. For example, each GOP might include three I-frames and twelve non-I frames, for a total of fifteen frames. The encoder 30 encodes an I-frame without reference to other frames, but often references one or more other frames in the same GOP when encoding a non-I frame. The encoder 30 does not, however, refer to frames in a different GOP when encoding a non-I frame.
Referring to FIGS. 1 and 2, when encoding an I-frame, the 8 x 8 blocks of pre-compression Y, CB, and CR values that represent the I-frame (figs. 1B-1D) pass through a summer 34 and then into a Discrete Cosine Transformer (DCT) 36, which transforms each block of values into a respective 8 x 8 block of one DC (zero frequency) coefficient and 63 AC (non-zero frequency) coefficients. The summer 34 is not needed when the encoder 30 encodes an I-frame, so the pre-compression values pass through it without being summed with any other values. (As described below, however, the summer 34 is often needed when the encoder 30 encodes a non-I frame.) A quantizer 38 limits each coefficient to a respective range of quantization values and provides the quantized AC and DC coefficients on paths 40 and 42, respectively. A predictive encoder 44 predictively encodes the DC coefficients, and a variable-length encoder 46 converts the quantized AC coefficients and the quantized, predictively encoded DC coefficients into variable-length codewords. These codewords constitute the encoded data that represent the pixel values of the encoded I-frame. A transmit buffer 48 then temporarily stores the codewords to allow synchronized transmission of the encoded data to a decoder (discussed below in conjunction with fig. 3). Alternatively, if the encoded data is to be stored rather than transmitted, the encoder 46 may provide the variable-length codewords directly to a storage medium such as a CD-ROM.
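The quantization step can be pictured with the following minimal C sketch, which limits a DCT coefficient to a range of quantization values by dividing by a step size and rounding. The single uniform step is an assumption for illustration; actual MPEG quantizers use per-coefficient weighting matrices, a rate-controlled scale factor, and special handling of the DC coefficient.

```c
/* Conceptual sketch of the quantizer 38: divide a coefficient by a step size
 * and round toward the nearest quantization level. */
int quantize_coeff(int coeff, int step)
{
    if (step <= 0) step = 1;                         /* guard against a bad step size */
    return (coeff >= 0) ? (coeff + step / 2) / step
                        : -((-coeff + step / 2) / step);
}
```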
If an I-frame will be used as a reference for one or more non-I frames in the same GOP, as is often the case, the encoder 30 decodes the encoded I-frame, using a decoding technique that is the same as or similar to the decoder's (fig. 3), to generate the corresponding reference frame, for the following reason. When decoding non-I frames that refer to the I-frame, the decoder has no choice but to use the decoded I-frame as the reference frame. Because MPEG encoding is lossy (some information is lost through quantization of the AC and DC DCT coefficients), the pixel values of the decoded I-frame usually differ from the pre-compression pixel values of the original I-frame. Therefore, using the pre-compression I-frame as the reference frame during encoding would introduce additional artifacts when the non-I frames are decoded, because the reference frame used for decoding (the decoded I-frame) would differ from the reference frame used for encoding (the pre-compression I-frame).
Thus, so that the encoder 30 generates a reference frame that is the same as or similar to the decoder's reference frame (fig. 3), the encoder includes a dequantizer 50 and an inverse DCT 52, which are designed to mimic the dequantizer and inverse DCT of the decoder. The dequantizer 50 dequantizes the quantized DCT coefficients from the quantizer 38, and the inverse DCT 52 transforms the dequantized coefficients into corresponding 8 x 8 blocks of Y, CB, and CR pixel values, which together make up the reference frame. Because of the distortion introduced by quantization, however, some or all of these decoded pixel values may differ from their corresponding pre-compression values, and thus the reference frame may differ from the pre-compression frame as discussed above. The decoded pixel values then pass through a summer 54 (used when a reference frame is generated from a non-I frame, as described below) and into a reference frame buffer 56, which stores the reference frame.
When encoding a non-I-frame, encoder 30 initially encodes each macroblock of the non-I-frame in at least two ways: one is the aforementioned I-frame mode and the other is the motion prediction mode discussed below. The encoder 30 then stores and transmits the resulting codeword with the least number of bits. Thus, this technique ensures that macroblocks of non-I-frames are encoded with the least number of bits.
With respect to motion prediction, an object in a frame exhibits motion if its position changes in a preceding or succeeding frame. For example, a horse galloping across the screen exhibits relative motion; alternatively, if the camera follows the horse, the background exhibits relative motion with respect to the horse. Typically, each succeeding frame in which an object appears contains at least some of the same macroblocks as the preceding frame, although these matching macroblocks may occupy frame locations in the succeeding frame that differ from the locations they occupy in the preceding frame. In contrast, a macroblock that contains part of a stationary object (e.g., a tree) or background (e.g., the sky) occupies the same frame location in each successive frame and thus exhibits zero displacement. In either case, instead of encoding each frame independently, it takes fewer data bits to tell the decoder, in effect, "macroblocks R and Z of frame 1 (a non-I frame) are the same as or similar to the macroblocks at locations S and T, respectively, of frame 0 (the reference frame)." (Frames 0 and 1 and macroblocks R, S, T, and Z are not shown.) This "statement" is encoded as respective motion vectors (one relating R to S and the other relating Z to T) whose displacement values indicate the frame-to-frame motion of the respective macroblocks. For an object with fast relative motion, the relative displacement is relatively large; conversely, for a stationary or slowly moving object or background, the relative displacement is relatively small or zero.
Still referring to FIG. 2, when encoding a non-I frame, a motion predictor 58 compares the pre-compression Y values of the macroblocks in the non-I frame (CB and CR values are not used in motion prediction) with the decoded Y values of the macroblocks in the reference I-frame and identifies matching macroblocks. For each macroblock of the non-I frame for which a match is found in the reference I-frame, the motion predictor 58 generates a motion vector that identifies the reference frame and the location of the matching macroblock within it. Thus, as discussed below in conjunction with fig. 3, when decoding these motion-encoded macroblocks of the non-I frame, the decoder uses the motion vectors to obtain the pixel values of the motion-encoded macroblocks from the matching macroblocks in the reference frame. The predictive encoder 44 predictively encodes the motion vectors, and the encoder 46 generates respective codewords for the encoded motion vectors and provides the codewords to the transmit buffer 48.
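The disclosure does not specify how the motion predictor 58 finds matching macroblocks, but a common measure of match quality is the sum of absolute differences (SAD) over the luminance values. The sketch below computes that cost for one candidate position; the SAD metric and an exhaustive search over candidates are assumptions, not the predictor's actual algorithm.

```c
#include <stdint.h>

/* Sum of absolute differences between a 16 x 16 macroblock of pre-compression
 * Y values and a candidate macroblock of decoded reference Y values. A motion
 * search would evaluate this cost at many candidate positions and keep the best. */
int sad_16x16(const uint8_t cur[16][16], const uint8_t ref[16][16])
{
    int sad = 0;
    for (int r = 0; r < 16; r++)
        for (int c = 0; c < 16; c++) {
            int d = (int)cur[r][c] - (int)ref[r][c];
            sad += (d < 0) ? -d : d;
        }
    return sad;
}
```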
Furthermore, because a macroblock in a non-I frame is often similar but not identical to the matching macroblock in the reference I-frame, the encoder 30 encodes the differences between them along with the motion vector, so that the decoder can account for them. More specifically, the motion predictor 58 provides the decoded Y values of the matching macroblock in the reference frame to the summer 34, which subtracts them, pixel by pixel, from the pre-compression Y values of the non-I frame macroblock being encoded. These differences, called residuals, are arranged in 8 x 8 blocks and processed by the DCT 36, the quantizer 38, the encoder 46, and the buffer 48 in a manner similar to that described above, except that the quantized DC coefficients of the residual blocks pass directly to the encoder 46 via line 40 and thus are not predictively encoded by the predictive encoder 44.
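A minimal sketch of the residual formation described above, assuming 8-bit Y values and signed 16-bit residuals:

```c
#include <stdint.h>

/* Form the 8 x 8 residual block that the DCT 36 receives when a macroblock of
 * a non-I frame is motion encoded: the decoded Y values of the matching
 * reference macroblock are subtracted, pixel by pixel, from the pre-compression
 * Y values being encoded. */
void form_residual(const uint8_t current[8][8], const uint8_t reference[8][8],
                   int16_t residual[8][8])
{
    for (int r = 0; r < 8; r++)
        for (int c = 0; c < 8; c++)
            residual[r][c] = (int16_t)((int)current[r][c] - (int)reference[r][c]);
}
```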
In addition, non-I frames may also be used as reference frames. When a non-I frame is used as a reference frame, the quantized residuals from the quantizer 38 are respectively dequantized and inverse transformed by the dequantizer 50 and the inverse DCT 52, so that, for the reasons discussed above, this reference non-I frame is the same as the one the decoder will use.
The motion predictor 58 provides to the summer 54 the decoded Y values of the reference frame from which the residuals were generated. The summer 54 adds the respective residuals from the inverse DCT 52 to these decoded Y values to generate the respective Y values of the reference non-I frame. The reference frame buffer 56 then stores the reference non-I frame along with the reference I-frame for use in motion encoding subsequent non-I frames in the sequence.
Still referring to fig. 2, the encoder 30 also includes a rate controller 60 to ensure that the transmit buffer 48, which typically transmits the encoded frame data at a fixed rate, neither overflows nor underflows (i.e., runs out of data). If either occurs, errors are introduced into the encoded data stream; for example, if the buffer 48 overflows, data from the encoder 46 is lost. Thus, the rate controller 60 adjusts the quantization scale factor used by the quantizer 38 based on how full the transmit buffer 48 is. Specifically, the fuller the buffer 48, the larger the controller 60 makes the scale factor, so that the encoder 46 generates fewer data bits; conversely, the emptier the buffer 48, the smaller the controller 60 makes the scale factor, so that the encoder 46 generates more data bits. This continuous adjustment ensures that the buffer 48 neither overflows nor underflows.
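As a rough illustration of this behavior (not the actual MPEG rate-control algorithm), the sketch below maps buffer fullness linearly onto an assumed quantization-scale range of 1 to 31:

```c
/* Illustrative rate-control rule: the fuller the transmit buffer, the larger
 * the quantization scale factor (fewer bits); the emptier the buffer, the
 * smaller the scale factor (more bits). */
int update_quant_scale(int buffer_fullness, int buffer_size)
{
    if (buffer_size <= 0) return 31;                          /* degenerate case: be conservative */
    int scale = 1 + (30 * buffer_fullness) / buffer_size;     /* 1 when empty .. 31 when full */
    if (scale < 1)  scale = 1;
    if (scale > 31) scale = 31;
    return scale;
}
```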
Fig. 3 is a block diagram of a conventional MPEG decompressor 60. The decompressor, more often referred to as a decoder, decodes the frames encoded by the encoder 30 of fig. 2.
For macroblocks of I-frames and for macroblocks of non-I frames that are not motion predicted, a variable-length decoder 62 decodes the variable-length codewords received from the encoder 30 (fig. 2). A predictive decoder 64 decodes the predictively encoded DC coefficients, and a dequantizer 65, which is the same as or similar to the dequantizer 50 of fig. 2, dequantizes the decoded AC and DC coefficients. An inverse DCT circuit 66, which is the same as or similar to the inverse DCT 52 of fig. 2, transforms the dequantized coefficients into pixel values. The decoded pixel values pass through a summer 68 (used in decoding motion-predicted macroblocks of non-I frames, as described below) and into a frame reordering buffer 70, which stores the decoded frames and rearranges them in the proper order for display on a video display unit 72. If a decoded I-frame is to be used as a reference frame, it is also stored in a reference frame buffer 74.
For motion-predicted macroblocks of non-I frames, the decoder 62, the dequantizer 65, and the inverse DCT 66 process the residual coefficients in the same manner as described above for the I-frame coefficients. In addition, the predictive decoder 64 decodes the motion vectors, and a motion interpolator 76 provides to the summer 68 the pixel values from the reference-frame macroblocks to which the motion vectors point. The summer 68 adds these pixel values to the residual pixel values to generate the pixel values of the decoded macroblocks and provides the decoded pixel values to the frame reordering buffer 70. If the encoder 30 (fig. 2) uses a decoded non-I frame as a reference frame, that decoded non-I frame is also stored in the reference frame buffer 74.
Referring to figs. 2 and 3, although the encoder 30 and the decoder 60 are described as including separate functional circuit blocks, each may be implemented in hardware, software, or a combination of hardware and software. For example, the encoder 30 and the decoder 60 are often implemented by respective processors that perform the functions of the corresponding circuit blocks.
More detailed discussions of the MPEG encoder 30 and MPEG decoder 60 of figs. 2 and 3, and of the MPEG standards in general, are available in many publications, such as "Video Compression" by Peter D. Symes, McGraw-Hill, 1998, which is incorporated by reference. Furthermore, there are other well-known block-based compression techniques for encoding and decoding both video and still images.
Summary of the invention
One aspect of the invention is an image processing circuit that compares a pixel value with a threshold value and modifies the pixel value when the pixel value has a predetermined relationship to the threshold value.
Another aspect of the invention is an image processing circuit that generates a random number and combines the random number with a pixel value.
Such image processing circuits can be used to remove artifacts, such as contouring artifacts, from a decoded electronic image.
Brief description of the drawings
Fig. 1A is a conventional macroblock of pixels.
FIG. 1B is a conventional block of pre-compression Y values corresponding respectively to the pixels in the macroblock of FIG. 1A.
FIG. 1C is a conventional block of pre-compression CB values corresponding respectively to the groups of pixels in the macroblock of FIG. 1A.
FIG. 1D is a conventional block of pre-compression CR values corresponding respectively to the groups of pixels in the macroblock of FIG. 1A.
Fig. 2 is a block diagram of a conventional MPEG encoder.
Fig. 3 is a block diagram of a conventional MPEG decoder.
Fig. 4 is a block diagram of a pixel circuit according to an embodiment of the invention.
Fig. 5 is a flow chart of the operation of a pixel circuit according to an embodiment of the invention.
Fig. 6 is a block diagram of respective portions of two video frames in a sequence that may be processed by the pixel circuit of fig. 4, according to one embodiment of the present invention.
Detailed description of the invention
Fig. 4 is a block diagram of a pixel circuit 100 according to an embodiment of the invention. The pixel circuit 100 modifies pixel values in a decoded electronic image in order to reduce the visibility of artifacts such as contouring artifacts. Specifically, the human eye is more sensitive to ordered image disturbances, such as those caused by quantization (discussed above in conjunction with figs. 1 and 2), than to random disturbances. Because quantization causes contouring, the pixel circuit 100 introduces random disturbances into the image to make the contouring less noticeable or even invisible to the human eye. Because contouring is more pronounced in darker image areas than in lighter areas, as discussed above, the described embodiment of the circuit 100 introduces random disturbances only into the darker image areas, which reduces processing time. In another embodiment, however, the circuit 100 also introduces random disturbances into lighter image areas.
The circuit 100 includes a threshold comparison circuit 102 that compares each pixel value of the image (both luminance and color-difference values) with a respective threshold. The circuit 102 provides pixel values below the threshold (dark pixel values) to a first input 104 of a combiner 106, which in one embodiment is a summer, and provides pixel values at or above the threshold (light pixel values) to a first input 108 of an image buffer 110.
A random number generator 112 has an input terminal 114 and an output terminal 116 and generates a respective random number for each pixel location in the image whose pixel values are processed by the circuit 102. In one embodiment, the output terminal 116 is coupled to the input terminal 114 by a feedback path 118. An optional truncator circuit 120 truncates each random number to an appropriate length. The truncator 120 (or the random number generator 112, when the truncator 120 is omitted) provides the random number to a second input 122 of the combiner 106. Thus, for each pixel location, the random number generator 112 provides a respective random number to the combiner 106, and the combiner 106 combines the random number with the corresponding dark pixel value to produce a modified dark pixel value. If a pixel location has a light pixel value rather than a dark pixel value, the random number generator 112 still generates a random number for that location, although the combiner 106 does not use it to modify the pixel value.
Clipper circuit 124 receives the modified dark pixel values from combiner 106 via input 126 and determines whether the modified pixel values fall outside a predetermined range of pixel values. If the modified pixel value does fall outside the predetermined range of pixel values, clipper circuit 124 "clips" the modified pixel value to fall within the predetermined range, thereby preventing register overflow resulting in an erroneous modified pixel value. If the modified pixel value does not fall outside the predetermined range of pixel values, the clipper circuit 124 does not change the modified pixel value. The clipper circuit 124 provides clipped or unclipped modified pixel values to a second input 128 of the image buffer 110, and the image buffer 110 stores the light pixel values and the modified dark pixel values in the proper order for display of an image on a display device (not shown).
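Putting the blocks of FIG. 4 together, a per-pixel sketch in C might look like the following. The 8-bit pixel range and the get_truncated_random() helper are assumptions made only to keep the sketch self-contained, and the actual circuit may of course be hardware rather than code.

```c
#include <stdint.h>

/* Per-pixel sketch of the data path of FIG. 4: compare the pixel value with a
 * threshold, combine only dark pixels with a small random offset, and clip the
 * result back into the valid range before it is stored in the image buffer. */
extern int get_truncated_random(void);   /* hypothetical: returns a value in -3..3 */

uint8_t process_pixel(uint8_t pixel, uint8_t threshold)
{
    if (pixel >= threshold)                    /* light pixel: passed through unchanged */
        return pixel;

    int modified = (int)pixel + get_truncated_random();   /* combiner 106 */
    if (modified < 0)   modified = 0;                      /* clipper 124 */
    if (modified > 255) modified = 255;
    return (uint8_t)modified;
}
```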
Referring to the flow diagrams of fig. 4 and 5, the operation of the pixel circuit 100 according to an embodiment of the invention will be discussed.
Referring to block 138 of fig. 5, the threshold comparator 102 receives a pixel value (which may be a luminance value of the pixel or a color difference value of the pixel). In one embodiment, the pixel values are 8 bits, so they range in size from 0-255. Also, the comparator 102 may receive pixel values in any order. For example, the comparator 102 may receive and process all luminance pixel values of an image before processing all color difference pixel values of the image. Alternatively, the comparator 102 receives and processes all pixel values (luminance and color difference values) for the first pixel location, then receives and processes all pixel values for the second pixel location, and so on.
Referring now to block 140, the comparator 102 determines whether the pixel value is less than its corresponding threshold value. In one embodiment, the comparator 102 compares all luminance pixel values with a luminance threshold and all color difference pixel values with a color difference threshold. In another embodiment, circuit 102 compares both the luminance and color difference pixel values to the same threshold. In yet another embodiment, comparator 102 uses a different threshold for each pixel location. Further, although the threshold may be any number, in one embodiment it is in the range of 50-80. Additionally, although the foregoing is directed to determining whether a pixel value is less than its threshold, the circuit 102 may be configured to determine whether a pixel value is less than or equal to, greater than, or greater than or equal to its threshold, depending on the type of trace that needs to be mitigated.
Referring to block 142, if the pixel value is not less than the threshold, the comparator 102 determines that the pixel value is a light pixel value and effectively generates a modified pixel value equal to the light pixel value (for consistency with the flow chart, this value is called a modified pixel value even though it equals the unmodified light pixel value). Referring to block 144, the comparator 102 provides the modified pixel value to the input 108 of the buffer 110 for storage.
However, if the pixel value is less than the threshold, then referring to block 146, the comparator 102 determines that the pixel value is a dark pixel value, and the combiner 106 combines the dark pixel value with the random number from the random number generator 112 (via the optional truncator 120, if present) to generate the modified pixel value. In one embodiment, the combiner 106 sums the dark pixel value and the random number to generate the modified pixel value. The operation of the random number generator 112 is described in greater detail below.
Referring now to block 150, clipper circuit 124 determines whether the modified dark pixel values from combiner 106 are within the appropriate range. If the modified dark pixel value is within the range, referring to block 144, circuit 124 outputs the modified dark pixel value to input 128 of buffer 110 for storage. However, if the modified dark pixel value is not within the range, circuitry 124 sets the modified pixel value to a value within the range and then provides it to buffer 110, see block 152. For example, in one embodiment, if the appropriate pixel value range is 0-255 and the modified pixel value is less than 0, then circuitry 124 sets the modified pixel value to 0. Likewise, if the modified pixel value is greater than 255, then circuit 124 sets the modified pixel value to 255.
Referring to block 154, the pixel circuit 100 determines whether there are more pixel values to process. If not, the pixel circuit 100 ends its flow until more pixel values are provided. If there are more pixel values to process, referring to block 138, the threshold comparator receives the next pixel value and repeats the steps described above beginning at block 140.
Referring again to FIG. 4, the operation of the random number generator 112 in accordance with an embodiment of the present invention will be discussed. The generator 112 generates random numbers according to a random number equation, which in one embodiment is:
(1) random number = (1664525 × seed value + 1013904223) mod 2^32
Although the generator 112 could generate a different random number for each pixel value, in one embodiment it generates a different random number for each pixel location within the image. Thus, in such an embodiment, each random number is used to modify all of the dark pixel values (luminance and color difference) for its pixel location. To generate these different random numbers, a different seed value is used for each pixel location. In one embodiment, an initial seed value is provided to the input 114 of the generator 112 to generate the first random number for the image. The first random number is then fed back to the input 114 via the feedback path 118 and used as the seed value for generating the next random number. This feedback continues until a random number has been generated for each pixel location in the image. Such feedback reduces processing time and overhead compared to providing a new, non-fed-back seed value to the input 114 for each random number. To generate the first random number for the next image, a new initial seed value may be provided, or the last random number generated for the previous image may be fed back as the seed value.
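A minimal C sketch of equation (1) with the feedback described above; using a 32-bit unsigned integer makes the mod 2^32 operation implicit:

```c
#include <stdint.h>

/* Sketch of the random number generator 112 implementing equation (1):
 * random number = (1664525 * seed value + 1013904223) mod 2^32.
 * The output is fed back as the next seed value (feedback path 118). */
static uint32_t seed;                         /* current seed value */

void seed_generator(uint32_t initial_seed) { seed = initial_seed; }

uint32_t next_random(void)
{
    seed = 1664525u * seed + 1013904223u;     /* unsigned arithmetic wraps modulo 2^32 */
    return seed;
}
```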
Because the random numbers generated by the random number generator 112 are quite large, the optional truncator 120 may be included in the circuit 100 to reduce the random numbers to a reasonable size before they are provided to the combiner 106. For example, in one embodiment, the truncator 120 reduces each random number to three bits (a two-bit magnitude and a one-bit sign), so that each truncated random number is one of the following values: -3, -2, -1, 0, 1, 2, or 3.
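A sketch of the truncator 120 under the stated three-bit format; taking the magnitude and sign from the low-order bits of the random number is an assumption, since the disclosure specifies only the resulting range of -3 to 3:

```c
#include <stdint.h>

/* Sketch of the truncator 120: reduce a 32-bit random number to a value with a
 * two-bit magnitude and a one-bit sign, i.e., one of -3..3. */
int truncate_random(uint32_t r)
{
    int magnitude = (int)(r & 0x3u);                 /* two magnitude bits: 0..3 */
    return (r & 0x4u) ? -magnitude : magnitude;      /* one sign bit */
}
```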
Referring to figs. 4 and 6, the operation of the random number generator 112 in generating time-varying and time-invariant random disturbances is discussed. Fig. 6 shows the pixels of two video frames 160 and 162 from the same sequence of video frames. To allow the pixel circuit 100 to process video frames with a random disturbance that varies over time, the generator 112 generates different random numbers for the same pixel locations in different video frames. For example, the generator 112 generates different random numbers for pixel locations a00 and b00, different random numbers for pixel locations a01 and b01, and so on. Therefore, the random interference pattern differs from frame to frame; because successive frames represent the scene at different times, the pattern varies with time. Conversely, to allow the pixel circuit 100 to process video frames with a random disturbance that is constant over time, the generator 112 generates the same random numbers for the same pixel locations in different video frames. For example, the generator 112 generates the same random number for pixel locations a00 and b00, the same random number for pixel locations a01 and b01, and so on. Thus, the random interference pattern is the same from frame to frame and does not change with time. Although the visual effect of processing a sequence of video images with a time-invariant random disturbance differs from that of processing the sequence with a time-varying random disturbance, both techniques reduce contouring artifacts to substantially the same degree.
In one embodiment, the random number generator 112 generates a time-varying random disturbance by using the feedback technique described above and providing a different initial seed value for each video frame. For example, assume that the pixel circuit 100 processes frame 160 before frame 162. The generator 112 uses a first initial seed value to generate the random number for pixel location a00, feeds back the a00 random number as the seed value for pixel location a01, and so on. After the circuit 100 finishes processing frame 160, the generator 112 uses a second initial seed value, different from the first, to generate the random number for pixel location b00, feeds back the b00 random number as the seed value for pixel location b01, and so on. Because the generator 112 uses different initial seed values for frames 160 and 162, the random number for a00 differs from the random number for b00, the random number for a01 differs from the random number for b01, and so on. Thus, the random interference pattern of frame 160 differs from that of frame 162. Even if the optional truncator 120 produces the same truncated random number for some pixel locations of frame 160 and the corresponding locations of frame 162, the two random interference patterns remain substantially different.
In another embodiment, the random number generator 112 generates a time-invariant random disturbance by using the feedback technique described above with the same initial seed value for every video frame. For example, assume that the pixel circuit 100 processes frame 160 before frame 162. The generator 112 uses an initial seed value to generate the random number for pixel location a00, feeds back the a00 random number as the seed value for pixel location a01, and so on. After the circuit 100 finishes processing frame 160, the generator 112 uses the same initial seed value to generate the random number for pixel location b00, feeds back the b00 random number as the seed value for pixel location b01, and so on. Because the generator 112 uses the same initial seed value for frames 160 and 162, the random number for a00 equals the random number for b00, the random number for a01 equals the random number for b01, and so on. Thus, the random interference pattern of frame 160 is the same as that of frame 162.
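Building on the seed_generator(), next_random(), and truncate_random() sketches above, the following sketch shows how the per-frame seed selects between the two embodiments: passing the same frame_seed for every frame yields the time-invariant pattern, while passing a different seed per frame (for example, the last random number of the previous frame, as mentioned above) yields the time-varying pattern. The flat, row-major luminance buffer is an assumption for illustration.

```c
#include <stdint.h>

void seed_generator(uint32_t initial_seed);   /* from the sketch above */
uint32_t next_random(void);                   /* from the sketch above */
int truncate_random(uint32_t r);              /* from the sketch above */

/* Apply the dithering of FIG. 4 to one luminance plane of one frame. */
void add_dither_to_frame(uint8_t *frame, int width, int height,
                         uint8_t threshold, uint32_t frame_seed)
{
    seed_generator(frame_seed);
    for (int i = 0; i < width * height; i++) {
        uint32_t r = next_random();        /* one random number per pixel location */
        if (frame[i] < threshold) {        /* only dark pixels are modified */
            int v = (int)frame[i] + truncate_random(r);
            if (v < 0)   v = 0;            /* clipper 124 */
            if (v > 255) v = 255;
            frame[i] = (uint8_t)v;
        }
    }
}
```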
From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without departing from the spirit and scope of the invention. For example, the pixel circuit 100 (fig. 4) may modify all pixel values, both light and dark, rather than only the dark pixel values. Furthermore, although the pixel circuit 100 is described as comprising separate functional circuit blocks, it may be implemented in software, hardware, or a combination of both. For example, the pixel circuit 100 may be a processor, or may be part of a larger image processing circuit implemented with one or more processors. Processors that may be used to implement the circuit 100 include the Map1000 processor from Equator Technologies, the Pentium and Celeron processors from Intel, and the K-6 processor from Advanced Micro Devices (AMD).
Claims (70)
1. An image processing circuit comprising:
a pixel circuit operable to compare a pixel value with a threshold value, and
modify the pixel value with a compensation value when the pixel value has a predetermined relationship to the threshold value.
2. The image processing circuit of claim 1 wherein the pixel values comprise luminance pixel values.
3. The image processing circuit of claim 1 wherein the pixel values comprise color difference pixel values.
4. The image processing circuit of claim 1 wherein the threshold is substantially in the range of 50-80.
5. The image processing circuit of claim 1 wherein the compensation value comprises a randomly generated value.
6. The image processing circuit of claim 1 wherein the compensation value comprises a randomly generated value in the range of-3 to 3.
7. The image processing circuit of claim 1 wherein the pixel circuit is further operable to:
determine whether the sum of the pixel value and the compensation value is within a predetermined range of pixel values; and
set the pixel value equal to a value within the range if the sum is outside the range.
8. The image processing circuit of claim 1 wherein the pixel circuit is operable to alter the pixel value when the pixel value is less than the threshold value.
9. The image processing circuit of claim 1 wherein the pixel circuit comprises a processor.
10. The image processing circuit of claim 1 wherein the pixel circuit is operative to modify the pixel value by adding a compensation value to the pixel value.
11. An image processing circuit comprising:
a pixel circuit operable to
generate a random number, and
combine the random number with a pixel value.
12. The image processing circuit of claim 11 wherein the pixel circuit is operable to truncate the random number prior to combining the random number with the pixel value.
13. The image processing circuit of claim 11 wherein the pixel circuit is further operable to clip pixel values when the pixel values are outside a predetermined range.
14. The image processing circuit of claim 11 wherein the pixel circuit is operable to add the random number to the pixel value.
15. An image processing circuit comprising:
a pixel circuit operable to
compare a first pixel value to a first threshold, the first pixel value corresponding to a pixel location of a first video frame;
add a first compensation value to the first pixel value when the first pixel value is less than the first threshold;
compare a second pixel value to a second threshold, the second pixel value corresponding to a pixel location of a second video frame; and
add a second compensation value to the second pixel value when the second pixel value is less than the second threshold.
16. The image processing circuit of claim 15 wherein the first and second pixel values comprise respective luminance pixel values.
17. The image processing circuit of claim 15 wherein the first and second pixel values comprise respective color difference pixel values.
18. The image processing circuit of claim 15 wherein the first and second thresholds are substantially in the range of 50-80.
19. The image processing circuit of claim 15 wherein the first threshold is equal to the second threshold.
20. The image processing circuit of claim 15 wherein the first and second compensation values comprise respective randomly generated numbers.
21. The image processing circuit of claim 15 wherein the first compensation value is equal to the second compensation value.
22. The image processing circuit of claim 15 wherein the first and second compensation values comprise randomly generated numbers each in the range of-3 to 3.
23. The image processing circuit of claim 15 wherein the pixel circuit is further operable to:
compare each of a first sum of the first pixel value and the first compensation value and a second sum of the second pixel value and the second compensation value with zero; and
set the first pixel value to zero when the first sum is less than zero and set the second pixel value to zero when the second sum is less than zero.
24. An image processing circuit comprising:
a pixel circuit operable to:
Generating a first random number using a first seed number;
comparing the first pixel value to a first threshold;
if the first pixel value is less than the first threshold, adding the first random number to the first pixel value;
generating a second random number using a second seed number;
comparing the second pixel value to a second threshold;
if the second pixel value is less than the second threshold, the second random number is added to the second pixel value.
25. The image processing circuit of claim 24 wherein the pixel circuit is operable to:
truncating the first random number before adding the first random number to the first pixel value; and
the second random number is truncated before being added to the second pixel value.
26. The image processing circuit of claim 24 wherein the second seed number is equal to the first random number.
27. The image processing circuit of claim 24 wherein the second seed number is equal to the first seed number.
28. The image processing circuit of claim 24 wherein the pixel circuit is operable to:
truncating the first random number before adding the first random number to the first pixel value;
truncating the second random number before adding the second random number to the second pixel value;
the second seed number is set equal to the first random number that is not truncated.
29. The image processing circuit of claim 24 wherein the pixel circuit is operable to generate the first and second random numbers using the following equation:
random number = (1664525 × seed number + 1013904223) mod 2^32.
30. The image processing circuit of claim 24 wherein:
the first pixel value corresponds to a first pixel location in an image; and
the second pixel value corresponds to a second pixel location in the image that is consecutive to the first pixel location.
31. An image processing circuit comprising:
a pixel circuit operable to:
generating a first random number using a first seed number;
comparing a first pixel value to a first threshold, the first pixel value corresponding to a starting pixel position in a first video frame;
adding a first random number to the first pixel value when the first pixel value is less than a first threshold;
generating a second random number using a second seed number;
comparing a second pixel value to a second threshold, the second pixel value corresponding to a starting pixel position in a second video frame; and
adding the second random number to the second pixel value when the second pixel value is less than the second threshold.
32. The image processing circuit of claim 31 wherein the second seed number is equal to the first seed number.
33. The image processing circuit of claim 31 wherein the pixel circuit is further operable to:
generating a third random number using a third seed number;
comparing a third pixel value to a third threshold, the third pixel value corresponding to an end pixel position in the first video frame;
adding the third random number to the third pixel value when the third pixel value is less than the third threshold; and
setting the second seed number equal to the third random number.
34. An image processing circuit comprising:
a pixel circuit operable to:
generating a first random number;
adding a first random number to the first pixel value;
generating a second random number; and is
Adding a second random number to the second pixel value;
35. the image processing circuit of claim 34 wherein the pixel circuit is operable to generate the first and second random numbers from the first and second seed numbers, respectively.
36. The image processing circuit of claim 34 wherein the pixel circuit is operable to:
generating a first random number from a seed number; and
generating a second random number from the first random number.
37. The image processing circuit of claim 34 wherein:
the first pixel value corresponds to a pixel location in the first video frame;
the second pixel value corresponds to the pixel location in the second video frame; and
the first random number is equal to the second random number.
38. The image processing circuit of claim 34 wherein:
the first pixel value corresponds to a starting pixel position in the first video frame;
the second pixel value corresponds to the pixel location in the second video frame; and
the first random number is not equal to the second random number.
39. A circuit, comprising:
a comparator having a pixel value input and first and second pixel value outputs;
a random number generator having a seed number input and a random number output;
a combiner having a first input coupled to the first pixel value output, a second input coupled to the random number output, and an output; and
an image buffer having a first input connected to the second pixel value output and a second input connected to the combiner output.
40. The circuit of claim 39, wherein the comparator is operable to: receive a pixel value at the pixel value input; output the pixel value at the first pixel value output when the pixel value is less than a threshold; and output the pixel value at the second pixel value output when the pixel value is greater than the threshold.
41. The circuit of claim 39 wherein the random number output is coupled to the seed number input.
42. The circuit of claim 39, wherein the combiner includes a summer.
43. The circuit of claim 39, further comprising a random number truncator disposed between the random number generator and the combiner, the truncator having an input coupled to the random number output of the random number generator and an output coupled to the second input of the combiner.
44. The circuit of claim 39, further comprising a clipper disposed between the combiner and the image buffer, the clipper having an input coupled to the combiner output and having an output coupled to the second input of the image buffer.
45. A method, comprising:
comparing a pixel value to a threshold; and
modifying the pixel value with a compensation value when the pixel value has a predetermined relationship to the threshold.
46. The method of claim 45, further comprising:
generating a random number; and
setting the compensation value equal to the random number.
47. The method of claim 45, further comprising:
generating a random number;
truncating the random number to a number in the range of -3 to 3; and
setting the compensation value equal to the truncated random number.
48. The method of claim 45, further comprising:
determining whether a sum of the pixel value and the compensation value is within a predetermined range of pixel values; and
setting the pixel value equal to a value within the range if the sum is outside the range.
49. The method of claim 45, wherein said altering comprises altering the pixel value when the pixel value is less than a threshold value.
50. The method of claim 45, wherein said modifying comprises adding a compensation value to the pixel value.
51. A method, comprising:
generating a random number; and
combining the random number with a pixel value.
52. The method of claim 51, further comprising truncating the random number before combining the random number with the pixel value.
53. The method of claim 51 further comprising clipping the pixel value when the pixel value is outside a predetermined range.
54. A method, comprising:
comparing a first pixel value to a first threshold, the first pixel value corresponding to a pixel location in a first video frame;
adding a first compensation value to the first pixel value when the first pixel value is less than a first threshold;
comparing a second pixel value to a second threshold, the second pixel value corresponding to a pixel location in a second video frame; and
adding a second compensation value to the second pixel value when the second pixel value is less than the second threshold.
55. The method of claim 54, wherein the first threshold is equal to the second threshold.
56. The method of claim 54, wherein the first and second compensation values are equal to the same randomly generated number.
57. The method of claim 54, further comprising:
comparing a first sum of the first pixel value and the first compensation value with zero;
setting the first pixel value equal to zero when the first sum is less than zero;
comparing a second sum of the second pixel value and the second compensation value with zero;
the second pixel value is set equal to zero when the second sum is less than zero.
58. A method, comprising:
generating a first random number using a first seed number;
comparing the first pixel value to a first threshold;
adding a first random number to the first pixel value when the first pixel value is less than a first threshold;
generating a second random number using a second seed number;
comparing the second pixel value to a second threshold;
adding the second random number to the second pixel value when the second pixel value is less than the second threshold.
59. The method of claim 58, wherein:
generating the first random number includes truncating the first random number; and
generating the second random number includes truncating the second random number.
60. The method of claim 58, wherein the second seed number is equal to the first random number.
61. The method of claim 58, wherein the second seed number is equal to the first seed number.
62. The method of claim 58, wherein generating the first and second random numbers comprises generating the first and second random numbers according to the following equation:
random number = (1664525 × seed number + 1013904223) mod 2^32.
63. A method, comprising:
generating a first random number using a first seed number;
comparing a first pixel value to a first threshold, the first pixel value corresponding to a starting pixel position in a first video frame;
adding a first random number to the first pixel value when the first pixel value is less than a first threshold;
generating a second random number using a second seed number;
comparing a second pixel value to a second threshold, the second pixel value corresponding to a starting pixel position in a second video frame; and
adding the second random number to the second pixel value when the second pixel value is less than the second threshold.
64. The method of claim 63, further comprising setting the second seed number equal to the first seed number.
65. The method of claim 63, further comprising
generating a third random number using a third seed number;
comparing a third pixel value to a third threshold, the third pixel value corresponding to an end pixel position in the first video frame;
adding the third random number to the third pixel value when the third pixel value is less than the third threshold; and
setting the second seed number equal to the third random number.
66. A method, comprising:
generating a first random number;
adding a first random number to the first pixel value;
generating a second random number;
a second random number is added to the second pixel value.
67. The method of claim 66, wherein generating the first and second random numbers comprises generating the first and second random numbers from the first and second seed numbers, respectively.
68. The method of claim 66, wherein:
generating the first random number includes generating the first random number from a seed number; and
generating the second random number includes generating the second random number from the first random number.
69. The method of claim 66, wherein:
the first pixel value corresponds to a pixel position of the first video frame;
the second pixel value corresponds to the pixel location of the second video frame;
generating the second random number includes generating the second random number equal to the first random number.
70. The method of claim 66, wherein:
the first pixel value corresponds to a starting pixel position of the first video frame;
the second pixel value corresponds to the pixel location of the second video frame;
generating the second random number includes generating the second random number not equal to the first random number.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US60/091,407 | 1998-07-01 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK1037443A true HK1037443A (en) | 2002-02-08 |