CN117750037B - Image processing method and device, electronic device and computer readable storage medium - Google Patents
- Publication number: CN117750037B (application CN202310241877.9A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04N19/172 — adaptive coding in which the coding unit is a picture, frame or field
- H04N19/176 — adaptive coding in which the coding unit is a block, e.g. a macroblock
- H04N19/182 — adaptive coding in which the coding unit is a pixel
- H04N19/86 — pre-/post-processing for video compression involving reduction of coding artifacts, e.g. blockiness
Abstract
The application discloses an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The method comprises the following steps: acquiring a target video frame and encoding information of the target video frame, wherein the encoding information comprises the macroblock positions of macroblocks in the target video frame; determining the boundary positions of the macroblock boundaries in the target video frame based on the macroblock positions; and determining a blockiness region from the target video frame based on the boundary positions, wherein the blockiness region is a region where blocking artifacts occur.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer readable storage medium.
Background
Compressing an image can introduce blocking artifacts, which visibly degrade image quality; to improve image quality, the blocking artifacts must therefore be eliminated. Before they can be eliminated, the region where they occur must first be identified from the image, so determining the blockiness region in an image is an important step.
Disclosure of Invention
The application provides an image processing method and device, an electronic device and a computer readable storage medium.
In a first aspect, an image processing method is provided, the method comprising:
acquiring a target video frame and encoding information of the target video frame, wherein the encoding information comprises the macroblock positions of macroblocks in the target video frame;
determining the boundary positions of the macroblock boundaries in the target video frame based on the macroblock positions;
and determining a blockiness region from the target video frame based on the boundary positions, wherein the blockiness region is a region where blocking artifacts occur.
In combination with any one of the embodiments of the present application, the determining a blocking area from the target video frame based on the boundary position includes:
determining boundary pixels located on a boundary of the macroblock based on the boundary position;
Constructing a pixel neighborhood with a preset size by taking a pixel to be confirmed in the target video frame as a center;
determining a target number of the boundary pixels within the pixel neighborhood;
Determining that the pixel to be confirmed is a blockiness pixel under the condition that the target number is larger than or equal to a first threshold value;
The blockiness region is determined from the blockiness pixels in the target video frame.
In combination with any one of the embodiments of the present application, before the determining that the pixel to be confirmed is a blockiness pixel in the case that the target number is greater than or equal to a first threshold, the method further includes:
Determining the average gradient of the boundary pixels in the pixel neighborhood to obtain a first gradient;
Determining the average gradient of the pixels in the pixel neighborhood except the boundary pixels to obtain a second gradient;
and the determining that the pixel to be confirmed is a blockiness pixel in the case that the target number is greater than or equal to a first threshold includes:
and determining the pixel to be confirmed as the blockiness pixel under the condition that the target number is larger than or equal to a first threshold value and the first gradient is larger than the product of the second gradient and a preset value, wherein the preset value is larger than 1.
In combination with any one of the embodiments of the present application, the determining the average gradient of the boundary pixels in the pixel neighborhood, to obtain a first gradient includes:
determining the gradient sum of the boundary pixels in the pixel neighborhood through a Sobel operator;
determining the quotient of the gradient sum of the boundary pixels and the target number to obtain the first gradient.
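The two gradient steps above can be sketched as follows. This is a minimal pure-Python illustration under assumed conventions: the frame is a list of lists of grayscale values, and the neighborhood and boundary are sets of (x, y) coordinates; the function names and data layout are illustrative, not from the patent. The patent does specify the Sobel operator for the gradient sum and division by the target number.

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_magnitude(img, x, y):
    """Approximate gradient magnitude at (x, y) with 3x3 Sobel kernels.
    Assumes (x, y) is at least one pixel away from the frame border."""
    gx = gy = 0
    for dy in range(-1, 2):
        for dx in range(-1, 2):
            v = img[y + dy][x + dx]
            gx += SOBEL_X[dy + 1][dx + 1] * v
            gy += SOBEL_Y[dy + 1][dx + 1] * v
    return (gx * gx + gy * gy) ** 0.5

def mean_gradients(img, neighborhood, boundary):
    """Split the neighborhood's pixels into boundary / non-boundary groups
    and return (first_gradient, second_gradient): the mean Sobel
    magnitude of each group (0.0 when a group is empty)."""
    first = [gradient_magnitude(img, x, y)
             for (x, y) in neighborhood if (x, y) in boundary]
    second = [gradient_magnitude(img, x, y)
              for (x, y) in neighborhood if (x, y) not in boundary]
    g1 = sum(first) / len(first) if first else 0.0
    g2 = sum(second) / len(second) if second else 0.0
    return g1, g2
```

The decision rule of the preceding embodiment then combines these values with the count test: the pixel to be confirmed is a blockiness pixel when the target number is at least the first threshold and `g1 > preset * g2` with `preset > 1`.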
In combination with any one of the embodiments of the present application, the determining a blocking area from the target video frame based on the boundary position includes:
determining pixels in the target video frame whose distance to the boundary position is less than or equal to a second threshold as blockiness pixels;
The blockiness region is determined from the blockiness pixels in the target video frame.
In combination with any one of the embodiments of the present application, the determining that pixels in the target video frame whose distance to the boundary position is less than or equal to the second threshold are blockiness pixels includes:
determining a flat region from the target video frame;
and determining pixels in the flat region whose distance to the boundary position is less than or equal to the second threshold as the blockiness pixels.
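Under the assumption that the flat region and the macroblock boundary are both available as coordinate sets, the distance criterion of this embodiment can be sketched as below. The Chebyshev distance and the example threshold are illustrative choices, since the patent does not fix the distance metric or the threshold value:

```python
def distance_blockiness_pixels(flat_pixels, boundary, second_threshold=2):
    """Mark flat-region pixels whose Chebyshev distance to the nearest
    macroblock-boundary pixel is <= second_threshold."""
    result = set()
    for (x, y) in flat_pixels:
        near = any(
            max(abs(x - bx), abs(y - by)) <= second_threshold
            for (bx, by) in boundary
        )
        if near:
            result.add((x, y))
    return result
```

Restricting the test to the flat region reflects the embodiment's rationale: blocking artifacts are most visible in flat areas, so edge-rich areas near a boundary need not be treated.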
In combination with any of the embodiments of the present application, after the determining of the blockiness region from the target video frame based on the boundary position, the method further includes:
And smoothing the blockiness area in the target video frame to obtain an enhanced video frame.
In combination with any one of the embodiments of the present application, the smoothing the blockiness region in the target video frame to obtain an enhanced video frame includes:
Sharpening the area except the blockiness area in the target video frame, and smoothing the blockiness area in the target video frame to obtain the enhanced video frame.
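The combined operation of this embodiment — smoothing inside the blockiness region and sharpening outside it — can be sketched as follows. The 3x3 box filter and the unsharp-mask amount are assumptions for illustration; the patent names the two operations but not the specific filters:

```python
def enhance_frame(img, blockiness):
    """Smooth pixels inside the blockiness region with a 3x3 box filter
    and sharpen the rest with a simple unsharp mask; border pixels are
    copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            mean = sum(img[y + dy][x + dx]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
            if (x, y) in blockiness:
                out[y][x] = mean                                   # smoothing
            else:
                out[y][x] = img[y][x] + 0.5 * (img[y][x] - mean)   # sharpening
    return out
```

Because the two branches are mutually exclusive per pixel, the enhanced frame suppresses block edges without blurring genuine detail elsewhere.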
In a second aspect, there is provided an image processing apparatus comprising:
An obtaining unit, configured to obtain a target video frame and encoding information of the target video frame, where the encoding information includes a macroblock position of a macroblock in the target video frame;
A determining unit configured to determine a boundary position of a boundary of a macroblock in the target video frame based on the macroblock position;
The determining unit is further configured to determine a blockiness region from the target video frame based on the boundary position, where the blockiness region is a region where blocking artifacts occur.
In combination with any one of the embodiments of the present application, the determining unit is configured to:
determining boundary pixels located on a boundary of the macroblock based on the boundary position;
Constructing a pixel neighborhood with a preset size by taking a pixel to be confirmed in the target video frame as a center;
determining a target number of the boundary pixels within the pixel neighborhood;
Determining that the pixel to be confirmed is a blockiness pixel under the condition that the target number is larger than or equal to a first threshold value;
The blockiness region is determined from the blockiness pixels in the target video frame.
In combination with any one of the embodiments of the present application, the determining unit is further configured to:
Determining the average gradient of the boundary pixels in the pixel neighborhood to obtain a first gradient;
Determining the average gradient of the pixels in the pixel neighborhood except the boundary pixels to obtain a second gradient;
and determining the pixel to be confirmed as the blockiness pixel under the condition that the target number is larger than or equal to a first threshold value and the first gradient is larger than the product of the second gradient and a preset value, wherein the preset value is larger than 1.
In combination with any one of the embodiments of the present application, the determining unit is configured to:
determining the gradient sum of the boundary pixels in the pixel neighborhood through a Sobel operator;
determining the quotient of the gradient sum of the boundary pixels and the target number to obtain the first gradient.
In combination with any one of the embodiments of the present application, the determining unit is configured to:
determining pixels in the target video frame whose distance to the boundary position is less than or equal to a second threshold as blockiness pixels;
The blockiness region is determined from the blockiness pixels in the target video frame.
In combination with any one of the embodiments of the present application, the determining unit is configured to:
determining a flat region from the target video frame;
and determining pixels in the flat region whose distance to the boundary position is less than or equal to the second threshold as the blockiness pixels.
In combination with any of the embodiments of the application, the device further comprises:
And the processing unit is used for carrying out smoothing processing on the blockiness area in the target video frame to obtain an enhanced video frame.
In combination with any one of the embodiments of the present application, the processing unit is configured to:
Sharpening the area except the blockiness area in the target video frame, and smoothing the blockiness area in the target video frame to obtain the enhanced video frame.
In a third aspect, an electronic device is provided, including: a processor and a memory for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform a method as described in the first aspect and any one of its possible implementations.
In a fourth aspect, there is provided another electronic device comprising: a processor, a transmitting means, an input means, an output means and a memory for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the first aspect and any implementation thereof as described above.
In a fifth aspect, there is provided a computer readable storage medium having stored therein a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the first aspect and any implementation thereof as described above.
In a sixth aspect, there is provided a computer program product comprising a computer program or instructions which, when run on a computer, cause the computer to perform the first aspect and any embodiments thereof.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
In the application, after the image processing apparatus acquires the target video frame and its encoding information, it determines the boundary positions of the macroblock boundaries in the target video frame using the macroblock positions in the encoding information. Because blocking artifacts tend to occur at macroblock boundaries, and regions closer to those boundaries are more likely to exhibit them, the image processing apparatus can determine the blockiness region from the target video frame based on the boundary positions; in this way, the blockiness region in the target video frame is determined from the encoding information of the target video frame.
Because the encoding information is information already carried by the target video frame, determining the blockiness region from it improves the efficiency of locating the blockiness region and reduces the amount of data processing required.
Drawings
In order to more clearly describe the embodiments of the present application or the technical solutions in the background art, the following description will describe the drawings that are required to be used in the embodiments of the present application or the background art.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a target video frame according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating another image processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a boundary map E obtained by determining boundary positions in the target video frame shown in FIG. 2 according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 6 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without inventive effort fall within the scope of the application.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Compression of an image may cause blocking artifacts in the image; for example, encoding a video may cause blocking artifacts in its video frames. Because blocking artifacts visibly degrade image quality, they must be eliminated to improve quality. Before they can be eliminated, the region where they occur (hereinafter, the blockiness region) must be determined from the image. Current technology typically finds this region with a dedicated blockiness-detection algorithm, which entails a large amount of data processing and low efficiency. Based on this, the embodiments of the present application provide an image processing method that reduces the data processing required to determine the blockiness region from an image and improves efficiency.
The execution subject of the embodiments of the present application is an image processing apparatus, which may be any electronic device capable of executing the technical solutions disclosed in the method embodiments of the present application. Optionally, the image processing apparatus is one of the following: a mobile phone, a computer, a tablet computer, or a wearable smart device.
It should be understood that the method embodiments of the present application may also be implemented by means of a processor executing computer program code. Embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application. Referring to fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the application.
101. Acquiring a target video frame and encoding information of the target video frame.
In the embodiment of the present application, the target video frame is any frame of an encoded video obtained by encoding a video to be encoded; the target video frame results from encoding a corresponding video frame to be encoded. For example, when video A is encoded to obtain video B, the first frame of video B is obtained by encoding the first frame of video A; video A is then the video to be encoded and video B is the encoded video. If the first frame of video A is the video frame to be encoded, the first frame of video B is the target video frame.
In the embodiment of the application, the encoding information of the target video frame comprises information used to encode the video frame to be encoded, including the macroblock position of each macroblock in the target video frame; the macroblock position represents where the macroblock lies in the target video frame. A macroblock is the coding unit of video coding: to obtain the target video frame, the video frame to be encoded is divided into at least two macroblocks, and each macroblock is then encoded. For example, fig. 2 shows a target video frame in which some macroblocks have been visualized: an orange grid overlays the back of the hand in the lower part of fig. 2, and each small square of the grid is one macroblock.
In one implementation of acquiring the target video frame, the image processing apparatus receives the target video frame input by a user through an input component. The input component may include: a keyboard, a mouse, a touch screen, a touch pad, or an audio input device.
In another implementation of acquiring the target video frame, the image processing apparatus receives the target video frame sent by a terminal. Optionally, the terminal may be any of the following: a mobile phone, a computer, a tablet computer, or a server. For example, the terminal is a server with a communication connection to the image processing apparatus. The server transmits the encoded video to the image processing apparatus over the communication connection, and the image processing apparatus acquires the target video frame by receiving the encoded video.
In one implementation of acquiring the encoding information of the target video frame, the image processing apparatus receives the encoding information input by a user through an input component.
In another implementation of acquiring the encoding information of the target video frame, the image processing apparatus receives it from a terminal. For example, the terminal is a server with a communication connection to the image processing apparatus. The server transmits the encoded video, which carries its encoding information, to the image processing apparatus over the communication connection, and the image processing apparatus uses the encoding information of the encoded video as the encoding information of the target video frame.
102. Determining the boundary position of the macroblock boundaries in the target video frame based on the macroblock position.
In the embodiment of the application, the boundary position is the position of a macroblock boundary in the target video frame; the image processing apparatus can derive the boundary positions directly from the macroblock positions.
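As a concrete illustration of this step, the sketch below derives the boundary-pixel coordinates from the macroblock positions. It is a minimal pure-Python example under assumed conventions — each macroblock is given as an (x, y, w, h) tuple in pixel units, and the function name and data layout are illustrative rather than taken from the patent:

```python
def macroblock_boundaries(blocks, width, height):
    """Return the set of (x, y) boundary-pixel coordinates for the given
    macroblocks, where each block is an (x, y, w, h) tuple in pixel units
    and (width, height) is the frame size."""
    boundary = set()
    for bx, by, bw, bh in blocks:
        for x in range(bx, min(bx + bw, width)):
            boundary.add((x, by))               # top edge
            if by + bh - 1 < height:
                boundary.add((x, by + bh - 1))  # bottom edge
        for y in range(by, min(by + bh, height)):
            boundary.add((bx, y))               # left edge
            if bx + bw - 1 < width:
                boundary.add((bx + bw - 1, y))  # right edge
    return boundary
```

For a regular 16x16 macroblock grid, the blocks list would simply enumerate (16*i, 16*j, 16, 16) for every grid cell inside the frame.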
103. Determining a blockiness region from the target video frame based on the boundary position.
In the embodiment of the present application, the blocking area is an area where blocking occurs in the target video frame, for example, in the target video frame shown in fig. 2, blocking exists on the back of the hand. Since blocking artifacts generally tend to occur at the boundaries of a macroblock, and regions closer to the boundaries of the macroblock have a higher probability of occurrence of blocking artifacts, the image processing device may determine the blocking artifact region from the target video frame based on the boundary position.
In the embodiment of the application, after the image processing apparatus acquires the target video frame and its encoding information, it determines the boundary positions of the macroblock boundaries in the target video frame using the macroblock positions in the encoding information. Because blocking artifacts tend to occur at macroblock boundaries, and regions closer to those boundaries are more likely to exhibit them, the image processing apparatus can determine the blockiness region from the target video frame based on the boundary positions; in this way, the blockiness region in the target video frame is determined from the encoding information of the target video frame.
Because the encoding information is information already carried by the target video frame, determining the blockiness region from it improves the efficiency of locating the blockiness region and reduces the amount of data processing required.
As an alternative embodiment, the image processing apparatus performs the following steps in performing step 103:
201. Determining boundary pixels located on the macroblock boundaries based on the boundary positions.
In the embodiment of the present application, pixels located on the boundary of a macroblock are referred to as boundary pixels. The image processing apparatus may take a pixel whose position in the target video frame is a boundary position as a boundary pixel.
202. Constructing a pixel neighborhood of a preset size centered on the pixel to be confirmed in the target video frame.
In the embodiment of the present application, the pixel to be confirmed may be any pixel in the target video frame. The preset size is set in advance; optionally, it is N×N with N odd, e.g. 5×5 or 7×7. The center of the pixel neighborhood is the pixel to be confirmed, i.e. the pixel to be confirmed is the geometric center of the pixel neighborhood.
203. Determining the target number of boundary pixels within the pixel neighborhood.
In the embodiment of the application, the target number is the number of boundary pixels in the pixel neighborhood.
204. Determining that the pixel to be confirmed is a blockiness pixel when the target number is greater than or equal to the first threshold.
Since the size of the pixel neighborhood is a preset size, the size of the pixel neighborhood is fixed, that is, the number of pixels within the pixel neighborhood is fixed. Therefore, the number of boundary pixels in the pixel neighborhood is large, which indicates that the center of the pixel neighborhood is close to the macro block boundary, that is, the probability of blocking effect at the position where the pixel to be confirmed is located is high, whereas the number of boundary pixels in the pixel neighborhood is small, which indicates that the center of the pixel neighborhood is far from the macro block boundary, that is, the probability of blocking effect at the position where the pixel to be confirmed is located is low.
In the embodiment of the application, the image processing device judges, based on the first threshold, whether the number of boundary pixels in the pixel neighborhood is large or small. Specifically, a number of boundary pixels greater than or equal to the first threshold indicates that the number of boundary pixels in the pixel neighborhood is large, and a number of boundary pixels less than the first threshold indicates that the number of boundary pixels in the pixel neighborhood is small.
Therefore, when the target number is greater than or equal to the first threshold, the image processing device determines that the probability of a blocking effect at the position of the pixel to be confirmed is high, and thus determines that the pixel to be confirmed is a blockiness pixel. Conversely, when the target number is less than the first threshold, the image processing device determines that the probability of a blocking effect at the position of the pixel to be confirmed is low, and thus determines that the pixel to be confirmed is not a blockiness pixel. In the embodiment of the application, a blockiness pixel is a pixel at which a blocking effect occurs with high probability.
205. And determining the blockiness area according to the blockiness pixels in the target video frame.
In one possible implementation, the image processing apparatus takes a pixel region including a blocking pixel as a blocking region. Alternatively, the image processing apparatus takes a pixel region composed of blocking pixels as the blocking region.
In this embodiment, the image processing apparatus determines the boundary pixels located on the boundary of the macroblock based on the boundary position, constructs a pixel neighborhood of the preset size centered on the pixel to be confirmed in the target video frame, and then determines the target number of boundary pixels within the pixel neighborhood. It then determines whether the pixel to be confirmed is a blockiness pixel according to the magnitude relation between the target number and the first threshold; specifically, the pixel to be confirmed is determined to be a blockiness pixel when the target number is greater than or equal to the first threshold. That is, the image processing apparatus may determine whether each pixel in the target video frame is a blockiness pixel based on steps 201 to 204, and after determining all the blockiness pixels in the target video frame, may determine the blockiness region according to the blockiness pixels in the target video frame.
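As a minimal sketch of steps 201 to 205, the target number can be computed for every pixel at once with an integral image over a binary boundary map. The neighborhood size and first threshold below are illustrative assumptions, not values prescribed by the embodiment:

```python
import numpy as np

def boundary_counts(boundary_map, n):
    """Target number for every pixel: the count of boundary pixels in the
    n x n neighborhood centered on it (zero padding outside the frame),
    computed via an integral image."""
    p = n // 2
    e = np.pad(boundary_map.astype(np.int64), p)
    ii = np.pad(e.cumsum(0).cumsum(1), ((1, 0), (1, 0)))  # integral image
    h, w = boundary_map.shape
    r = np.arange(h)[:, None]
    c = np.arange(w)[None, :]
    # Window sum from four integral-image lookups.
    return ii[r + n, c + n] - ii[r + n, c] - ii[r, c + n] + ii[r, c]

def blockiness_pixels_by_count(boundary_map, n=5, first_threshold=5):
    # Step 204: a pixel is a blockiness pixel when its target number
    # reaches the first threshold.
    return boundary_counts(boundary_map, n) >= first_threshold
```

The integral image makes the cost independent of the neighborhood size, which matters when every pixel of a video frame is a pixel to be confirmed.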
As an alternative embodiment, the image processing apparatus further performs the following steps before performing step 204:
301. And determining the average gradient of the boundary pixels in the pixel neighborhood to obtain a first gradient.
In one possible implementation, the image processing apparatus may determine gradients of respective boundary pixels within the pixel neighborhood, and then determine an average value of the gradients of the boundary pixels within the pixel neighborhood, to obtain the first gradient.
In another possible implementation, the image processing apparatus determines the gradient sum of boundary pixels within the pixel neighborhood by a Sobel operator. Then, the quotient of the gradient sum of the boundary pixels and the target number is determined to obtain the first gradient.
302. And determining the average gradient of the pixels in the pixel neighborhood except the boundary pixels, and obtaining a second gradient.
For convenience of description, pixels within a pixel neighborhood other than boundary pixels are hereinafter referred to as non-boundary pixels, and in one possible implementation, the image processing apparatus may determine gradients of respective non-boundary pixels within the pixel neighborhood, and then determine an average value of gradients of the non-boundary pixels within the pixel neighborhood, to obtain the second gradient.
In another possible implementation, the image processing apparatus determines the gradient sum of non-boundary pixels within the pixel neighborhood by a Sobel operator, and determines the number of non-boundary pixels within the pixel neighborhood. The quotient of the gradient sum of the non-boundary pixels and the number of non-boundary pixels is then determined, resulting in the second gradient.
In the case where the first gradient and the second gradient are obtained, the image processing apparatus performs the following steps in performing step 204:
303. And determining the pixel to be confirmed as the blockiness pixel under the condition that the target number is larger than or equal to a first threshold value and the first gradient is larger than the product of the second gradient and a preset value.
There may be regions of large gradient in the target video frame (hereinafter simply referred to as large gradient regions). When a blocking effect occurs in a large gradient region, its influence on the display effect of the target video frame is small; specifically, the blocking effect there is generally hardly perceptible to the naked eye. Therefore, it suffices to detect blockiness regions in the areas other than the large gradient regions (hereinafter simply referred to as flat regions) without detecting them in the large gradient regions. By subsequently removing blockiness in the blockiness regions of the flat regions, the display effect of the target video frame, that is, the image quality of the target video frame, is improved, while the amount of data processing required to improve the image quality is reduced.
Since the gradient of non-boundary pixels in a large gradient region may be large, while the gradient of non-boundary pixels in a flat region is generally small, when the gradient of the boundary pixels is large it is possible to judge whether the pixel neighborhood belongs to a large gradient region or a flat region based on the difference between the average gradient of boundary pixels and the average gradient of non-boundary pixels in the pixel neighborhood. Specifically, a large difference between these two average gradients, that is, a large difference between the first gradient and the second gradient, indicates that the pixel neighborhood belongs to a flat region, whereas a small difference between them indicates that the pixel neighborhood belongs to a large gradient region.
In the embodiment of the application, the image processing device judges whether the difference between the first gradient and the second gradient is large by determining whether the first gradient is greater than the product of the second gradient and a preset value, where the preset value is greater than 1. Specifically, the image processing apparatus determines that the difference between the first gradient and the second gradient is large when the first gradient is greater than the product of the second gradient and the preset value, and determines that the difference is small when the first gradient is less than or equal to that product. Accordingly, when the first gradient is greater than the product of the second gradient and the preset value, the image processing device determines that the pixel neighborhood belongs to a flat region, that is, the pixel to be confirmed is a pixel in a flat region, and proceeds to determine whether the pixel to be confirmed is a blockiness pixel. When the first gradient is less than or equal to that product, the device determines that the pixel neighborhood belongs to a large gradient region, that is, the pixel to be confirmed is a pixel in a large gradient region, and does not need to determine whether it is a blockiness pixel.
Therefore, the image processing apparatus determines that the pixel to be confirmed is a blocking pixel in the case where the target number is greater than or equal to the first threshold value and the first gradient is greater than the product of the second gradient and the preset value.
In this embodiment, before determining whether the pixel to be confirmed is a blockiness pixel, the image processing apparatus determines the average gradient of the boundary pixels in the pixel neighborhood to obtain the first gradient, and determines the average gradient of the non-boundary pixels in the pixel neighborhood to obtain the second gradient. It then judges, according to the difference between the first gradient and the second gradient, whether the pixel neighborhood belongs to a large gradient region or a flat region, and only when the pixel neighborhood belongs to a flat region does it go on to determine whether the pixel to be confirmed is a blockiness pixel. Therefore, by determining that the pixel to be confirmed is a blockiness pixel when the target number is greater than or equal to the first threshold and the first gradient is greater than the product of the second gradient and the preset value, the blockiness pixels are guaranteed to be pixels in flat regions at which a blocking effect occurs with high probability.
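The gradient check of steps 301 to 303 can be sketched for a single neighborhood as follows; the Sobel magnitudes are computed directly, and the first threshold and the preset value alpha are illustrative assumptions:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(patch):
    """Gradient magnitude of each interior pixel of the patch via the
    Sobel operator; the one-pixel rim is left at zero."""
    h, w = patch.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = patch[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.hypot((SOBEL_X * win).sum(), (SOBEL_Y * win).sum())
    return out

def is_blockiness_pixel(patch, boundary_mask, first_threshold=6, alpha=2.0):
    """Step 303: the center pixel is a blockiness pixel when the target
    number reaches the first threshold AND the first gradient exceeds
    alpha times the second gradient."""
    grad = sobel_magnitude(patch)
    target_number = int(boundary_mask.sum())
    if target_number < first_threshold:
        return False
    first_gradient = grad[boundary_mask].sum() / target_number
    non_boundary = ~boundary_mask
    second_gradient = grad[non_boundary].sum() / non_boundary.sum()
    return first_gradient > alpha * second_gradient
```

A sharp step along the macroblock boundary then passes the test (boundary gradients dominate), while a uniformly textured neighborhood, where boundary and non-boundary gradients are comparable, does not.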
As an alternative embodiment, the image processing apparatus performs the following steps in performing step 103:
401. And determining pixels in the target video frame whose distance to the boundary position is less than or equal to a second threshold value as blockiness pixels.
As described in step 103, the blocking effect generally tends to occur at the boundary of a macroblock, and the probability of a blocking effect is higher in regions closer to the macroblock boundary. The image processing apparatus may therefore select pixels close to the macroblock boundary from the target video frame as blockiness pixels; in other words, pixels close to the boundary in the target video frame may be regarded as blockiness pixels.
In the embodiment of the application, the image processing device judges, based on the second threshold, whether a pixel in the target video frame is near to or far from the boundary position. Specifically, a distance from the pixel to the boundary position less than or equal to the second threshold indicates that the pixel is near the boundary position, so the pixel can be determined to be a blockiness pixel; conversely, a distance greater than the second threshold indicates that the pixel is far from the boundary position, so the pixel can be determined not to be a blockiness pixel. Accordingly, the image processing apparatus determines pixels in the target video frame whose distance to the boundary position is less than or equal to the second threshold as blockiness pixels.
402. And determining the blockiness area according to the blockiness pixels in the target video frame.
The implementation of this step may refer to the implementation of step 205, and will not be described here again.
In this embodiment, the image processing apparatus determines whether the pixel in the target video frame is a blocking pixel based on a magnitude relation between a distance from the pixel in the target video frame to the boundary position and the second threshold value, specifically, determines that the pixel in the target video frame is a blocking pixel in a case where the distance from the pixel in the target video frame to the boundary position is less than or equal to the second threshold value. That is, the image processing apparatus may determine whether each pixel in the target video frame is a blocking pixel based on step 401, respectively, and after determining all the blocking pixels in the target video frame, may determine the blocking region according to the blocking pixels in the target video frame.
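Under the assumption of a regular grid of macroblocks (for example 16×16), the distance test of step 401 reduces to a per-axis distance to the nearest grid line; the block size and second threshold below are illustrative:

```python
import numpy as np

def distance_to_macroblock_boundary(height, width, block=16):
    """Per-pixel distance to the nearest macroblock boundary of a
    regular block grid, taken as the minimum over the two axes."""
    def axis_dist(n):
        off = np.arange(n) % block
        return np.minimum(off, block - off)  # distance to the nearest grid line
    return np.minimum(axis_dist(height)[:, None], axis_dist(width)[None, :])

def blockiness_pixels_by_distance(height, width, block=16, second_threshold=2):
    # Step 401: pixels whose distance to a boundary position is at most
    # the second threshold are blockiness pixels.
    return distance_to_macroblock_boundary(height, width, block) <= second_threshold
```

For irregular macroblock layouts the same mask could instead be built from the boundary positions in the encoding information, but the regular-grid case shows the thresholding itself.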
As an alternative embodiment, the image processing apparatus performs the following steps in performing step 401:
501. a flat region is determined from the target video frame.
In the embodiment of the present application, the flat region is a region with small gradient, that is, a region other than the large gradient regions described above. In one possible implementation, the average gradient of pixels within the flat region is less than or equal to a third threshold, which indicates that the average gradient of pixels within the flat region is small, i.e. the flat region is a region of small gradient.
In another possible implementation, the flat region is a region in the target video frame where the gray level variation is less than or equal to a fourth threshold. A gray level variation less than or equal to the fourth threshold indicates that the flat region is a region with small gradient.
502. And determining that the pixel with the distance from the boundary position in the flat area being smaller than or equal to a second threshold value is the blockiness pixel.
As described in step 303, when a blocking effect occurs in a large gradient region, its presence is generally difficult for the human eye to perceive. It therefore suffices to detect blockiness regions in the flat regions without detecting them in the large gradient regions; by subsequently removing blockiness in the blockiness regions of the flat regions, the display effect, that is, the image quality of the target video frame, is improved while the amount of data processing required to do so is reduced. Accordingly, in step 502, the image processing apparatus determines blockiness pixels from the flat region in the target video frame; specifically, it determines pixels in the flat region whose distance to the boundary position is less than or equal to the second threshold as blockiness pixels.
In this embodiment, before determining whether a pixel is a blockiness pixel, the image processing apparatus determines a flat region from the target video frame, and then determines whether the pixels in the flat region are blockiness pixels; specifically, it determines pixels in the flat region whose distance to the boundary position is less than or equal to the second threshold as blockiness pixels, so that the blockiness pixels are pixels in the flat region at which a blocking effect occurs with high probability.
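Steps 501 and 502 can be sketched using the gray-level-variation definition of the flat region. The window size, fourth threshold, and second threshold are illustrative, and the distance map is assumed to be precomputed (for example as in step 401):

```python
import numpy as np

def flat_region_mask(frame, window=5, fourth_threshold=8):
    """Step 501: a pixel belongs to the flat region when the gray-level
    variation (max - min) inside its window does not exceed the fourth
    threshold."""
    p = window // 2
    padded = np.pad(frame.astype(np.int64), p, mode="edge")
    h, w = frame.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + window, j:j + window]
            out[i, j] = (win.max() - win.min()) <= fourth_threshold
    return out

def blockiness_pixels_in_flat_region(frame, dist_to_boundary,
                                     second_threshold=2, window=5,
                                     fourth_threshold=8):
    # Step 502: blockiness pixels are flat-region pixels whose distance
    # to the boundary position is at most the second threshold.
    return (flat_region_mask(frame, window, fourth_threshold)
            & (dist_to_boundary <= second_threshold))
```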
As an alternative embodiment, the image processing apparatus further performs the following steps after determining the blockiness region from the target video frame based on the boundary position:
601. And smoothing the blockiness area in the target video frame to obtain an enhanced video frame.
A blocking effect may occur in the blockiness region of the target video frame, which in turn degrades the image quality of the target video frame. By smoothing the blockiness region in the target video frame, the image processing device can eliminate the blocking effect of the blockiness region, thereby improving the image quality of the target video frame.
As an alternative embodiment, the image processing apparatus performs the following steps in performing step 601:
701. And sharpening the area except the blockiness area in the target video frame, and smoothing the blockiness area in the target video frame to obtain the enhanced video frame.
In step 701, the image processing apparatus smooths the blockiness region in the target video frame and sharpens the regions other than the blockiness region, so that the edges in the target video frame are sharpened while the blockiness is eliminated, and the image quality of the target video frame is thus improved at the same time. This also avoids the situation in which the boundary of the blockiness region is itself sharpened by the sharpening processing.
In one possible implementation, the image processing apparatus processes the target video frame by the following formula, which smooths the blockiness region in the target video frame and sharpens the regions other than the blockiness region, to obtain the enhanced video frame:
A_pro = A + gain × (A − f(A)) … formula (1)
Where A_pro denotes the enhanced video frame and A denotes the target video frame. f(A) represents the intermediate-frequency and low-frequency signals of the target video frame; optionally, f(·) represents Gaussian filtering, that is, f(A) is obtained by performing Gaussian filtering on the target video frame to remove its high-frequency signal, leaving the intermediate-frequency and low-frequency signals of the target video frame.
gain represents the gain. Optionally, for the blockiness region in the target video frame, the value of gain is −1, in which case A_pro = f(A); that is, in the process of obtaining the enhanced video frame, the high-frequency signal in the blockiness region of the target video frame is removed while its low-frequency and intermediate-frequency signals are retained, in other words, the blockiness region of the target video frame is smoothed. For the regions other than the blockiness region in the target video frame, the value of gain is positive.
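Formula (1) with a spatially varying gain can be sketched as follows, using a separable Gaussian filter for f(·); the sharpening gain and Gaussian parameters are illustrative assumptions:

```python
import numpy as np

def gaussian_blur(frame, sigma=1.0, radius=2):
    """f(.): separable Gaussian filtering that removes the high-frequency
    signal and keeps the low/intermediate frequencies."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    padded = np.pad(frame, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, rows)

def enhance(frame, blockiness_mask, sharpen_gain=0.8):
    """A_pro = A + gain * (A - f(A)): gain = -1 inside the blockiness
    region (so A_pro = f(A), i.e. smoothing) and a positive gain
    elsewhere (sharpening)."""
    low_mid = gaussian_blur(frame)
    gain = np.where(blockiness_mask, -1.0, sharpen_gain)
    return frame + gain * (frame - low_mid)
```

A single pass thus performs both operations: wherever the mask is set the result collapses to the filtered frame, and elsewhere it is a standard unsharp-mask sharpening.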
Based on the technical scheme provided by the embodiment of the application, the embodiment of the application also provides a possible application scenario. Referring to fig. 3, fig. 3 is a flowchart illustrating another image processing method according to an embodiment of the application.
As shown in fig. 3, after a code stream is input to the image processing apparatus, the image processing apparatus performs video decoding on the code stream to obtain a decoded video, where the decoded video includes the decoded video frame A. Taking the decoded video frame A as the target video frame, the apparatus obtains the encoding information of the target video frame from the encoding information of the encoded video, and obtains the possible positions of blockiness in the target video frame, namely the boundary positions of the macroblock boundaries in the target video frame, according to the macroblock positions in the encoding information of the target video frame.
Alternatively, the image processing apparatus obtains a boundary map E by determining the boundary positions in the target video frame; for example, the image shown in fig. 4 is the boundary map E obtained by determining the boundary positions in the target video frame shown in fig. 2. In the boundary map E shown in fig. 4, the black lines indicate positions in the target video frame where a blocking effect may occur; specifically, each black line is a macroblock boundary. It should be understood that the boundaries of two adjacent macroblocks do not overlap, so each black line includes two rows of pixels, or two columns of pixels, that is, each black line includes the boundaries of two adjacent macroblocks.
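Assuming 16×16 macroblocks, the boundary map E described above, with each black line covering the two adjacent boundary rows or columns of neighboring macroblocks, can be sketched as:

```python
import numpy as np

def boundary_map(height, width, block=16):
    """Boundary map E: E(p) = 1 when pixel p lies on a macroblock
    boundary. Each internal boundary covers two adjacent rows/columns,
    since the boundaries of two adjacent macroblocks do not overlap."""
    e = np.zeros((height, width), dtype=np.uint8)
    for k in range(block, height, block):
        e[k - 1:k + 1, :] = 1  # last row of one block, first row of the next
    for k in range(block, width, block):
        e[:, k - 1:k + 1] = 1  # last column of one block, first column of the next
    return e
```

In practice the macroblock positions would come from the encoding information rather than a fixed grid, but the resulting map has the same form.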
Optionally, in the boundary map E, for any pixel p, E(p) = 1 indicates that the pixel p is in a region covered by a black line, and E(p) = 0 indicates that the pixel p is in a region not covered by a black line. Specifically, the pixel values in the boundary map E may be represented by the following formula:

E(p) = 1, if pixel p lies on a macroblock boundary; E(p) = 0, otherwise … formula (2)
The image processing apparatus further performs blockiness region calculation on the target video frame according to the boundary positions in the boundary map E, so as to determine the blockiness region from the target video frame (corresponding to step 103 described above), and obtains a blockiness map B, where the blockiness map B includes the position information of the blockiness region in the target video frame.
Alternatively, in the case where the pixel values in the boundary map E are expressed by formula (2), the image processing apparatus determines the blockiness region in the target video frame by performing steps 201, 202, 203, 301, 302 and 303, and may determine whether the pixel to be confirmed in the target video frame is a blockiness pixel by the following formula:

B(p) = 1, if sum1 ≥ T1 and (1/sum1) × Σ_{E(q_i)=1} grad_i > α × (1/(N×N − sum1)) × Σ_{E(q_i)=0} grad_i; B(p) = 0, otherwise … formula (3)

Where sum1 represents the target number, sum1 = Σ_{i=1}^{N×N} E(q_i), where N×N denotes the preset size, E(q_i) = 1 denotes that the ith pixel q_i in the pixel neighborhood is a boundary pixel, and E(q_i) = 0 denotes that it is a non-boundary pixel. grad_i represents the gradient of the ith pixel in the pixel neighborhood constructed centered on the pixel to be confirmed, Σ_{E(q_i)=1} grad_i represents the sum of the gradients of the boundary pixels in the pixel neighborhood, and Σ_{E(q_i)=0} grad_i represents the sum of the gradients of the non-boundary pixels in the pixel neighborhood. T1 is the first threshold, and α is the preset value in step 303.
Finally, after obtaining the blockiness map B, the image processing apparatus performs sharpening calculation on the target video frame according to the blockiness map B (the implementation of the sharpening calculation may refer to step 701 described above), so as to obtain the enhanced video frame (i.e. the enhanced video frame A_pro in fig. 3).
It will be appreciated by those skilled in the art that, in the above methods of the specific embodiments, the written order of the steps does not imply a strict execution order; the actual execution order should be determined by the functions and possible internal logic of the steps.
If the technical scheme of the application relates to personal information, a product applying the technical scheme of the application clearly informs the user of the personal information processing rules and obtains the individual's voluntary consent before processing the personal information. If the technical scheme of the application relates to sensitive personal information, a product applying the technical scheme of the application obtains the individual's separate consent before processing the sensitive personal information, and at the same time meets the requirement of "explicit consent". For example, a clear and prominent sign is set at a personal information collection device such as a camera to inform that the personal information collection range has been entered and that personal information will be collected; if the individual voluntarily enters the collection range, this is regarded as consent to the collection of his or her personal information. Alternatively, on a device that processes personal information, under the condition that obvious signs/information are used to inform the user of the personal information processing rules, personal authorization is obtained by means such as a pop-up window or by asking the individual to upload his or her personal information. The personal information processing rules may include information such as the personal information processor, the purpose of processing, the processing method, and the types of personal information to be processed.
The method of the embodiments of the present application is described in detail above; the apparatus of the embodiments of the present application is described below.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image processing apparatus 1 according to an embodiment of the present application, where the image processing apparatus 1 includes an obtaining unit 11 and a determining unit 12, and optionally, the image processing apparatus 1 further includes a processing unit 13, specifically:
An obtaining unit 11, configured to obtain a target video frame and encoding information of the target video frame, where the encoding information includes a macroblock position of a macroblock in the target video frame;
a determining unit 12 for determining a boundary position of a boundary of a macroblock in the target video frame based on the macroblock position;
the determining unit 12 is further configured to determine a blocking area from the target video frame based on the boundary position, where the blocking area is an area where blocking occurs.
In combination with any of the embodiments of the present application, the determining unit 12 is configured to:
determining boundary pixels located on a boundary of the macroblock based on the boundary position;
Constructing a pixel neighborhood with a preset size by taking a pixel to be confirmed in the target video frame as a center;
determining a target number of the boundary pixels within the pixel neighborhood;
Determining that the pixel to be confirmed is a blockiness pixel under the condition that the target number is larger than or equal to a first threshold value;
The blockiness region is determined from the blockiness pixels in the target video frame.
In combination with any embodiment of the present application, the determining unit 12 is further configured to:
Determining the average gradient of the boundary pixels in the pixel neighborhood to obtain a first gradient;
Determining the average gradient of the pixels in the pixel neighborhood except the boundary pixels to obtain a second gradient;
and determining the pixel to be confirmed as the blockiness pixel under the condition that the target number is larger than or equal to a first threshold value and the first gradient is larger than the product of the second gradient and a preset value, wherein the preset value is larger than 1.
In combination with any of the embodiments of the present application, the determining unit 12 is configured to:
determining the gradient sum of the boundary pixels in the pixel neighborhood through a Sobel operator;
Determining the quotient of the gradient sum of the boundary pixels and the target number to obtain the first gradient.
In combination with any of the embodiments of the present application, the determining unit 12 is configured to:
determining pixels in the target video frame, the distance from the pixels to the boundary position of which is smaller than or equal to a second threshold value, as blocking effect pixels;
The blockiness region is determined from the blockiness pixels in the target video frame.
In combination with any of the embodiments of the present application, the determining unit 12 is configured to:
determining a flat region from the target video frame;
And determining pixels in the flat region, the distance from the boundary position of which is smaller than or equal to a second threshold value, as the blockiness pixels.
In combination with any of the embodiments of the application, the device further comprises:
And a processing unit 13, configured to perform smoothing processing on the blockiness region in the target video frame, so as to obtain an enhanced video frame.
In combination with any embodiment of the present application, the processing unit 13 is configured to:
Sharpening the area except the blockiness area in the target video frame, and smoothing the blockiness area in the target video frame to obtain the enhanced video frame.
In the embodiment of the application, after the image processing device acquires the target video frame and the coding information of the target video frame, the boundary position of the boundary of the macro block in the target video frame is determined by utilizing the macro block position in the coding information. Also, since the blocking effect is generally liable to occur at the boundary of the macroblock, and the probability of occurrence of the blocking effect is higher in a region closer to the boundary of the macroblock, the image processing apparatus can determine the blocking effect region from the target video frame based on the boundary position, whereby determination of the blocking effect region in the target video frame based on the encoding information of the target video frame can be achieved.
Because the encoding information of the target video frame is information carried by the target video frame, determining the blockiness region in the target video frame based on the encoding information can improve the efficiency of determining the blockiness region and reduce the amount of data processing required to determine the blockiness region in the target video frame.
In some embodiments, the functions or modules included in the apparatus provided by the embodiments of the present application may be used to perform the methods described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
Fig. 6 is a schematic hardware structure of an electronic device according to an embodiment of the present application. The electronic device 2 comprises a processor 21 and a memory 22. Optionally, the electronic device 2 further comprises input means 23 and output means 24. The processor 21, memory 22, input device 23, and output device 24 are coupled by connectors, including various interfaces, transmission lines or buses, etc., which are not limited in the embodiments of the present application. It should be appreciated that in the various embodiments of the application, coupling means interconnection in a particular manner, including direct connection or indirect connection through other devices, for example through various interfaces, transmission lines, buses, etc.
The processor 21 may comprise one or more processors, for example one or more central processing units (central processing unit, CPU), which in the case of a CPU may be a single-core CPU or a multi-core CPU. Alternatively, the processor 21 may be a processor group constituted by a plurality of CPUs, the plurality of processors being coupled to each other through one or more buses. In the alternative, the processor may be another type of processor, and the embodiment of the application is not limited.
Memory 22 may be used to store computer program instructions as well as various types of computer program code for performing aspects of the present application. Optionally, the memory includes, but is not limited to, random access memory (random access memory, RAM), read-only memory (ROM), erasable programmable read-only memory (erasable programmable read only memory, EPROM), or portable read-only memory (compact disc read-only memory, CD-ROM) for associated instructions and data.
The input means 23 are for inputting data and/or signals and the output means 24 are for outputting data and/or signals. The input device 23 and the output device 24 may be separate devices or may be an integral device.
It will be appreciated that in embodiments of the present application, the memory 22 may be used to store not only relevant instructions, but also relevant data, and embodiments of the present application are not limited to the specific data stored in the memory.
It will be appreciated that fig. 6 shows only a simplified design of an electronic device. In practical applications, the electronic device may further include other necessary elements, including but not limited to any number of input/output devices, processors, memories, etc., and all electronic devices that can implement the embodiments of the present application are within the scope of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working procedures of the systems, apparatuses, and units described above may refer to the corresponding procedures in the foregoing method embodiments, and are not repeated herein. It will be further apparent to those skilled in the art that each embodiment of the present application is described with its own emphasis; for convenience and brevity, the same or similar parts may not be detailed in every embodiment, so for parts not described, or not described in detail, in one embodiment, reference may be made to the descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of the units is merely a logical function division, and there may be other divisions in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted through a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (digital subscriber line, DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital versatile disc (digital versatile disc, DVD)), a semiconductor medium (e.g., a solid state disk (solid state disk, SSD)), or the like.
Those of ordinary skill in the art will appreciate that all or part of the flows of the above method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk.
Claims (10)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310241877.9A CN117750037B (en) | 2023-03-14 | 2023-03-14 | Image processing method and device, electronic device and computer readable storage medium |
| PCT/CN2023/109330 WO2024187659A1 (en) | 2023-03-14 | 2023-07-26 | Image processing method and apparatus, electronic device, and computer readable storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310241877.9A CN117750037B (en) | 2023-03-14 | 2023-03-14 | Image processing method and device, electronic device and computer readable storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN117750037A (en) | 2024-03-22 |
| CN117750037B (en) | 2024-11-19 |
Family
ID=90281928
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310241877.9A Active CN117750037B (en) | 2023-03-14 | 2023-03-14 | Image processing method and device, electronic device and computer readable storage medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN117750037B (en) |
| WO (1) | WO2024187659A1 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103119939A (en) * | 2010-08-20 | 2013-05-22 | 英特尔公司 | Techniques for identifying block artifacts |
| CN110115039A (en) * | 2016-12-28 | 2019-08-09 | 索尼公司 | Image processing apparatus, image processing method, and program |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20060108994A (en) * | 2005-04-14 | 2006-10-19 | 엘지전자 주식회사 | Post-Processing Method for Improving Block Effects of Image Coding |
| TWI422228B (en) * | 2009-01-15 | 2014-01-01 | Silicon Integrated Sys Corp | Deblock method and image processing apparatus |
| CN101494787B (en) * | 2009-02-10 | 2011-02-09 | 重庆大学 | A Deblocking Method Based on Blocking Detection |
| CN102098501B (en) * | 2009-12-09 | 2013-05-08 | 中兴通讯股份有限公司 | Method and device for removing block effects of video image |
| KR20110125153A (en) * | 2010-05-12 | 2011-11-18 | 에스케이 텔레콤주식회사 | Image filtering method and apparatus and method and apparatus for encoding / decoding using the same |
| CN107360435B (en) * | 2017-06-12 | 2019-09-20 | 苏州科达科技股份有限公司 | Blockiness detection methods, block noise filtering method and device |
| US11290749B2 (en) * | 2018-07-17 | 2022-03-29 | Comcast Cable Communications, Llc | Systems and methods for deblocking filtering |
- 2023-03-14: filed in CN as CN202310241877.9A, granted as CN117750037B (Active)
- 2023-07-26: filed as PCT application PCT/CN2023/109330, published as WO2024187659A1 (Pending)
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103119939A (en) * | 2010-08-20 | 2013-05-22 | 英特尔公司 | Techniques for identifying block artifacts |
| CN110115039A (en) * | 2016-12-28 | 2019-08-09 | 索尼公司 | Image processing apparatus, image processing method, and program |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2024187659A9 (en) | 2025-01-02 |
| CN117750037A (en) | 2024-03-22 |
| WO2024187659A1 (en) | 2024-09-19 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111402170B (en) | Image enhancement method, device, terminal and computer readable storage medium | |
| JP6726060B2 (en) | Image processing apparatus, control method thereof, and program | |
| CN111861938A (en) | Image denoising method and device, electronic equipment and readable storage medium | |
| CN107563979B (en) | Image processing method, image processing device, computer-readable storage medium and computer equipment | |
| CN107908998B (en) | Two-dimensional code decoding method and device, terminal equipment and computer readable storage medium | |
| CN112419161B (en) | Image processing method and device, storage medium and electronic equipment | |
| CN111311619A (en) | Method and device for realizing slider verification | |
| CN114596210A (en) | Noise estimation method, device, terminal equipment and computer readable storage medium | |
| WO2021102702A1 (en) | Image processing method and apparatus | |
| CN111724326B (en) | Image processing method and device, electronic equipment and storage medium | |
| CN113422956B (en) | Image coding method and device, electronic equipment and storage medium | |
| CN111083478A (en) | Video frame reconstruction method and device and terminal equipment | |
| CN117750037B (en) | Image processing method and device, electronic device and computer readable storage medium | |
| CN111083494A (en) | Video coding method and device and terminal equipment | |
| CN111091506A (en) | Image processing method and device, storage medium and electronic equipment | |
| CN113627314B (en) | Face image blur detection method, device, storage medium and electronic device | |
| CN117768647B (en) | Image processing method, device, equipment and readable storage medium | |
| CN110516680B (en) | Image processing method and device | |
| CN116681618B (en) | Image denoising method, electronic device and storage medium | |
| CN108765503B (en) | Skin color detection method, device and terminal | |
| CN112770015B (en) | Data processing method and related device | |
| CN112911186B (en) | Image storage method and device, electronic equipment and storage medium | |
| CN114373153B (en) | Video imaging optimization system and method based on multi-scale array camera | |
| CN111080550B (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
| WO2015128302A1 (en) | Method and apparatus for filtering and analyzing a noise in an image |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||