CN115361556A - High-efficiency video compression algorithm based on self-adaption and system thereof - Google Patents
High-efficiency video compression algorithm based on self-adaption and system thereof
- Publication number
- CN115361556A (application number CN202210822132.7A)
- Authority
- CN
- China
- Prior art keywords
- module
- coding
- decoding
- compensation
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
All classifications fall under H04N19/00 (methods or arrangements for coding, decoding, compressing or decompressing digital video signals):
- H04N19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/154 Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
- H04N19/30 Methods or arrangements using hierarchical techniques, e.g. scalability
- H04N19/63 Methods or arrangements using transform coding with sub-band based transforms, e.g. wavelets
Abstract
The invention relates to the technical field of image processing and discloses an adaptive high-efficiency video compression algorithm and system. The system comprises a coding input module, a discrete wavelet transform module, a storage classification module, a decoding output module and a data sampling point compensation module. The coding input module, which is scalable, codes video data through context-adaptive variable length coding and binary arithmetic coding based on the data sampling point compensation module, and the discrete wavelet transform module discretizes the scale of the continuous wavelet transform according to the power of the signal algorithm.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a self-adaptive high-efficiency video compression algorithm and a system thereof.
Background
Images fall into two categories: static images and dynamic images, the latter also called video. Most of the information human beings receive comes from vision, and static and dynamic images are the media that carry the richest, most intuitive and most concrete information. With the rapid development of the network era, people have come to appreciate the benefits that static and dynamic images bring; dynamic images in particular, together with online live streaming, have grown rapidly in recent years, and video has become one of the most widespread forms of transmission in modern life. Beyond recording memorable moments, the prevalence of video platforms means that more and more users choose to upload videos to share their lives. This, however, brings problems: video files are large and often occupy a great deal of device space, and many video platforms also limit the size and format of uploaded videos, so compressing video has become a necessary choice.
Given the large number of videos now produced, video compression in the prior art suffers from a low conversion rate of the compressed video stream and poor quality during encoding and decoding, and image distortion appears during playback, resulting in a poor user experience.
Disclosure of Invention
Solves the technical problem
Aiming at the defects of the prior art, the invention provides an adaptive high-efficiency video compression algorithm and system that use a discrete wavelet transform module and scalable coding and decoding modules to achieve high-quality, high-ratio compression and conversion that dynamically adapts to various video streams; a data sampling point compensation module reconstructs the image so as to reduce the distortion between the source image and the reconstructed image; and context-adaptive variable length coding and binary arithmetic coding are adopted, reducing both distortion and the size of the code stream.
Technical scheme
In order to achieve the above purpose, the invention provides the following technical scheme: an adaptive high-efficiency video compression algorithm and system comprising a coding input module, a discrete wavelet transform module, a storage classification module, a decoding output module and a data sampling point compensation module. The coding input module, which is scalable, codes video data through context-adaptive variable length coding and binary arithmetic coding based on the data sampling point compensation module. The discrete wavelet transform module discretizes the scale of the continuous wavelet transform according to the power of the signal algorithm. The storage classification module is a transfer station for the video data and classifies the video through classification coding. The decoding output module, which is also scalable, decodes the video data based on the context-adaptive variable length decoding module and the data sampling point compensation module and then transmits it. The data sampling point compensation module reconstructs the image through adaptive data sampling point compensation.
Further, the context-adaptive variable length coding dynamically selects the code table used for coding according to the already-coded syntax elements and updates the length of the trailing coefficient suffix on the fly in order to obtain a high compression ratio; the video image is transmitted to the discrete wavelet transform module after prediction, transform and quantization coding.
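For illustration, the kind of on-the-fly suffix-length adaptation mentioned above can be sketched as follows. This is a minimal sketch in the style of H.264 CAVLC level coding; the threshold rule, the constants 3 and 6, and the function name are assumptions for illustration and are not taken from the patent.

```python
def update_suffix_length(suffix_length, coded_level):
    """Adapt the suffix length after each coded level so that larger
    magnitudes switch the coder to longer (higher-rate) suffix codes."""
    if suffix_length == 0:
        suffix_length = 1                      # leave the shortest table after the first level
    if abs(coded_level) > (3 << (suffix_length - 1)) and suffix_length < 6:
        suffix_length += 1                     # escalate when the magnitude outgrows the table
    return suffix_length

# Example: growing level magnitudes push the suffix length upward
suffix = 0
for level in (1, -2, 4, 9, 20):
    suffix = update_suffix_length(suffix, level)
    print(level, "->", suffix)
```

The point of the adaptation is that small residual levels are coded with short suffixes while large levels automatically move the coder to longer codes, which is what keeps the compression ratio high without signalling the table choice explicitly.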
Further, the binary arithmetic coding converts each non-binary syntax element value into a binary bit sequence. During the encoder's progressive interval calculation, syntax elements that have not yet been output remain pending; the longer the input stream, the smaller the resulting interval and the higher the precision needed to record it for transmission. The coding input module therefore does not wait until the progressive calculation reaches the final interval before outputting a codeword: in the binary arithmetic coding module the upper and lower limits of the interval are represented in binary form, and whenever the most significant bit of the lower limit is the same as that of the upper limit, the bit is removed, ensuring that the encoder outputs the code stream continuously during the progressive calculation, with the length of the output sequence in direct proportion to the interval.
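A minimal sketch of this incremental renormalisation is given below. The 16-bit register width, the fixed probability model, the straddle (underflow) handling via the quarter test, and all variable names are assumptions for illustration, not details taken from the patent; whenever the most significant bits of the lower and upper bounds agree, that bit is emitted and shifted out, so output is produced continuously instead of only at the final interval.

```python
# Illustrative binary arithmetic encoder with incremental bit output.

PRECISION = 16
TOP = (1 << PRECISION) - 1            # largest value of the interval registers
HALF = 1 << (PRECISION - 1)
QUARTER = 1 << (PRECISION - 2)

def encode_bits(symbols, p_zero=0.7):
    """Encode a sequence of binary symbols against a fixed probability of '0'."""
    low, high, pending = 0, TOP, 0
    out = []

    def emit(bit):
        nonlocal pending
        out.append(bit)
        out.extend([1 - bit] * pending)    # flush bits deferred by the straddle case
        pending = 0

    for s in symbols:
        span = high - low + 1
        split = low + int(span * p_zero) - 1   # boundary between the '0' and '1' sub-intervals
        if s == 0:
            high = split
        else:
            low = split + 1
        while True:
            if high < HALF:                    # both bounds in the lower half: the MSB is 0
                emit(0)
            elif low >= HALF:                  # both bounds in the upper half: the MSB is 1
                emit(1)
                low -= HALF
                high -= HALF
            elif low >= QUARTER and high < HALF + QUARTER:
                pending += 1                   # bounds straddle the midpoint: defer the bit
                low -= QUARTER
                high -= QUARTER
            else:
                break
            low = 2 * low                      # shift the decided bit out of both registers
            high = 2 * high + 1

    pending += 1                               # simplified termination of the final interval
    emit(0 if low < QUARTER else 1)
    return out

print(encode_bits([0, 0, 1, 0, 1, 1, 0, 0]))
```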
Further, the discrete wavelet transform module separates low-frequency information from high-frequency information. The low-frequency information carries the characteristics of the signal, while the high-frequency information gives its details or differences: if the high-frequency content is removed from a human voice, the content can still be understood, whereas if enough low-frequency information is removed only meaningless sounds remain. Low-frequency and high-frequency information are therefore both used in wavelet analysis: the original signal is passed through two complementary filters to produce two signals, and the low-frequency signal is decomposed again and again in a successive decomposition process, so that the signal is broken down into a number of low-resolution components.
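A minimal sketch of this repeated low-pass/high-pass split follows, using the Haar filter pair on a one-dimensional signal. The filter choice, the three-level depth and the sample values are assumptions for illustration; the patent does not name a specific wavelet.

```python
import numpy as np

def haar_step(signal):
    """One analysis step: split a signal into low-frequency (approximation)
    and high-frequency (detail) halves using the Haar filter pair."""
    even, odd = signal[0::2], signal[1::2]
    low = (even + odd) / np.sqrt(2)      # smoothed, low-frequency content
    high = (even - odd) / np.sqrt(2)     # detail, high-frequency content
    return low, high

def dwt(signal, levels=3):
    """Repeatedly decompose the low-frequency branch, as described above."""
    coeffs = []
    low = np.asarray(signal, dtype=float)
    for _ in range(levels):
        low, high = haar_step(low)
        coeffs.append(high)
    coeffs.append(low)                   # final low-resolution component
    return coeffs

# Example: an 8-sample signal decomposed into three detail bands plus one approximation
for band in dwt([4, 6, 10, 12, 8, 6, 5, 5]):
    print(band)
```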
Further, the storage classification module is a transfer station for the video data and stores the video through the classification coding module. The storage method comprises feature extraction and feature pooling. Feature extraction extracts image features and audio features from the video stream at 1 s intervals: the image features are the feature vectors of the last fully connected layer of Inception V3, and the audio features are extracted with VGG. Feature pooling takes two input streams, an image input and an audio input; each input stream is fed into a learnable pooling to form a single representation, the two representations are then concatenated and fed into a fully connected layer for dimensionality reduction to obtain a feature vector of fixed dimension, and finally the feature vector is fed into a CG layer, which captures the feature correlation information of the vector and re-adjusts its weights.
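The pooling and gating path described above can be sketched as follows in PyTorch. The layer sizes, the use of simple mean pooling as a stand-in for the learnable pooling, and the gating formula x * sigmoid(Wx + b) used for the CG layer are assumptions, since the patent does not define these components precisely.

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Pool per-second image and audio features, concatenate, reduce, and re-weight."""
    def __init__(self, img_dim=2048, aud_dim=128, out_dim=1024):
        super().__init__()
        self.reduce = nn.Linear(img_dim + aud_dim, out_dim)    # dimensionality reduction
        self.gate = nn.Linear(out_dim, out_dim)                 # context-gating weights

    def forward(self, img_feats, aud_feats):
        # img_feats: (T, img_dim) image vectors, one per 1 s interval
        # aud_feats: (T, aud_dim) audio vectors, one per 1 s interval
        img_repr = img_feats.mean(dim=0)        # mean pooling as a stand-in for learnable pooling
        aud_repr = aud_feats.mean(dim=0)
        fused = torch.cat([img_repr, aud_repr], dim=-1)
        reduced = self.reduce(fused)
        # gating: re-weight each feature of the reduced vector by a learned sigmoid gate
        return reduced * torch.sigmoid(self.gate(reduced))

model = FeatureFusion()
video_vec = model(torch.randn(30, 2048), torch.randn(30, 128))
print(video_vec.shape)   # torch.Size([1024])
```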
Further, the context-adaptive variable length decoding module in the decoding output module is the inverse process of the context-adaptive variable length coding module. The input data of the context-adaptive variable length decoding is a bit stream taken from the slice layer data, the basic unit of decoding is a 4 × 4 pixel block, and the output is a sequence containing all the amplitudes of each pixel of the 4 × 4 block. The context-adaptive variable length decoding steps are as follows (a small reconstruction sketch is given after the list):
a. initializing all coefficient amplitudes;
b. decoding the number of nonzero coefficients and the number of trailing coefficients;
c. decoding trailing coefficient symbols;
d. decoding the non-zero coefficient amplitude;
e. decoding total_zeros and run_before;
f. combining the nonzero coefficient amplitudes with the run-length information to obtain the whole residual data block.
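Step f can be illustrated with the small sketch below. The scan-position bookkeeping and the example values are illustrative assumptions; in particular the run value for the final coefficient is treated as given, which a real decoder would derive from the remaining zero count.

```python
def rebuild_block(levels, runs, block_size=16):
    """Combine decoded nonzero levels and run_before values into a residual block.

    `levels` and `runs` are given in decoding order, i.e. from the
    highest-frequency coefficient downwards, as produced by steps b-e.
    """
    coeffs = [0] * block_size
    pos = -1
    # Walk the lists backwards so coefficients are placed from low to high frequency.
    for level, run in zip(reversed(levels), reversed(runs)):
        pos += run + 1          # skip the zeros that precede this coefficient
        coeffs[pos] = level
    return coeffs

# Example: three nonzero levels with runs of zeros between them (illustrative values)
print(rebuild_block(levels=[5, -3, 2], runs=[1, 0, 4]))
```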
Furthermore, the data sampling point compensation module first divides the whole picture into a number of units and then performs a data sampling point compensation operation on each pixel in each unit, selecting a pixel compensation mode according to the characteristics of the pixels in the unit so as to reduce the distortion between the source image and the reconstructed image. The adaptive sampling point compensation modes fall into two categories, band (strip) compensation and edge compensation. Band compensation divides the intensity range of the pixel values into a number of bands; the pixels in each band share the same compensation value, and during compensation the band compensation value corresponding to the band in which the reconstructed pixel lies is selected. Edge compensation is mainly used to compensate the contours of the image: the current pixel value is compared with its 2 neighbouring pixel values, one of 4 templates being used to select the 2 neighbouring pixels for comparison, which yields the category of the pixel, and the decoding end applies the corresponding compensation correction according to the pixel category information indicated in the code stream.
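A minimal sketch of the band compensation branch follows. The 32-band split of an 8-bit intensity range, the number of signalled offsets and the variable names are assumptions for illustration, and the edge compensation templates are omitted for brevity.

```python
import numpy as np

def band_offset(reconstructed, offsets, start_band, num_bands=32, bit_depth=8):
    """Apply band compensation: pixels falling in the signalled bands get the
    corresponding offset added; all other pixels are left unchanged."""
    band_width = (1 << bit_depth) // num_bands           # e.g. 256 / 32 = 8 levels per band
    out = reconstructed.astype(np.int32)
    for i, off in enumerate(offsets):                     # offsets for consecutive bands
        band = start_band + i
        mask = (out // band_width) == band
        out[mask] += off
    return np.clip(out, 0, (1 << bit_depth) - 1).astype(np.uint8)

# Example: adjust pixels that fall into four consecutive intensity bands
block = np.array([[12, 200, 90, 95], [88, 91, 14, 210]], dtype=np.uint8)
print(band_offset(block, offsets=[2, -1, 3, 0], start_band=10))
```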
Furthermore, the scalability of the coding input module and the decoding output module means that the video stream is divided into two or more code streams, also called layers, at encoding time. At least one of these layers is a base layer and the rest are enhancement layers. The base layer contains the basic and most important information of the video signal, and after receiving it the receiving end can reconstruct an image of basic quality; the enhancement layers contain the detail information of the video signal, and the receiving end decodes the base layer and the enhancement layers together to reconstruct an image of higher quality.
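One way to picture this base-layer/enhancement-layer split is spatial scalability by downsampling, sketched below. The 2x downsampling factor and the residual-based enhancement layer are assumptions for illustration, not the patent's specific layering.

```python
import numpy as np

def split_layers(frame):
    """Split a frame into a coarse base layer and a detail-carrying enhancement layer."""
    base = frame[::2, ::2]                                # base layer: 2x downsampled picture
    upsampled = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
    enhancement = frame.astype(np.int16) - upsampled      # enhancement layer: residual detail
    return base, enhancement

def reconstruct(base, enhancement=None):
    """Base layer alone gives a basic-quality picture; adding the enhancement layer restores detail."""
    upsampled = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1).astype(np.int16)
    if enhancement is None:
        return upsampled.clip(0, 255).astype(np.uint8)
    return (upsampled + enhancement).clip(0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
base, enh = split_layers(frame)
print(np.array_equal(reconstruct(base, enh), frame))      # True: both layers restore the frame
```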
Further, the method comprises the following specific steps:
S1, first, a user records video information through a shooting device into the coding input module of the shooting device, and the coding input module codes the video data using context-adaptive variable length coding and binary arithmetic coding;
S2, a discrete wavelet transform module in the shooting device discretizes the video data according to the scale of its continuous wavelet transform and the power of the signal algorithm;
S3, after being processed by the coding input module and the discrete wavelet transform module, the video data is transmitted to the storage classification module for classification;
S4, finally, the video data is transmitted from the storage classification module to the decoding output module, where the context-adaptive variable length decoding module and the data sampling point compensation module carry out the output processing.
Advantageous effects
The invention provides an adaptive high-efficiency video compression algorithm and system with the following beneficial effects: a discrete wavelet transform module and scalable coding and decoding modules achieve high-quality, high-ratio compression and conversion that dynamically adapts to various video streams; a data sampling point compensation module reconstructs the image so as to reduce the distortion between the source image and the reconstructed image; and context-adaptive variable length coding and binary arithmetic coding are adopted, reducing both distortion and the size of the code stream.
Drawings
FIG. 1 is an overall system diagram of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be discussed further in subsequent figures.
Embodiments of the application are directed to a computer system/server that is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with computer systems/servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above, and the like.
The computer system/server is described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server is implemented in a distributed cloud computing environment where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules are located in both local and remote computer system storage media including memory storage devices.
Example 1
This embodiment provides an adaptive high-efficiency video compression algorithm and system as shown in fig. 1, comprising a coding input module, a discrete wavelet transform module, a storage classification module, a decoding output module and a data sampling point compensation module. The coding input module, which is scalable, codes the video data through context-adaptive variable length coding and binary arithmetic coding based on the data sampling point compensation module. The discrete wavelet transform module discretizes the scale of the continuous wavelet transform according to the power of the signal algorithm. The storage classification module is a transfer station for the video data and classifies the video through classification coding. The decoding output module, which is also scalable, decodes the video data based on the context-adaptive variable length decoding module and the data sampling point compensation module and then transmits it. The data sampling point compensation module reconstructs the image through adaptive data sampling point compensation so as to reduce the distortion between the source image and the reconstructed image.
A high-efficiency video compression algorithm based on self-adaptation and a management method of a system thereof are disclosed, which specifically comprise the following steps:
S1, first, a user records video information through a shooting device into the coding input module of the shooting device, and the coding input module codes the video data using context-adaptive variable length coding and binary arithmetic coding;
In this embodiment it should be specifically noted that the context-adaptive variable length coding dynamically selects the code table used for coding according to the already-coded syntax elements and updates the length of the trailing coefficient suffix on the fly to obtain a high compression ratio, and that the video image is transmitted to the discrete wavelet transform module after prediction, transform and quantization coding; this embodiment is not limited in this respect.
Specifically, it should be noted that the binary arithmetic coding converts each non-binary syntax element value into a binary bit sequence. During the encoder's progressive interval calculation, syntax elements that have not yet been output remain pending; the longer the input stream, the smaller the resulting interval and the higher the precision needed to record it for transmission. The coding input module does not wait until the progressive calculation reaches the final interval before outputting a codeword: in the binary arithmetic coding module the upper and lower limits of the interval are represented in binary form, and whenever the most significant bit of the lower limit is the same as that of the upper limit, the bit is removed, ensuring that the encoder outputs the code stream continuously during the progressive calculation, with the length of the output sequence in direct proportion to the interval; this embodiment is not specifically limited in this respect.
S2, a discrete wavelet transform module in the shooting device discretizes the video data according to the scale of its continuous wavelet transform and the power of the signal algorithm;
In this embodiment it should be specifically noted that the discrete wavelet transform module separates low-frequency information from high-frequency information. The low-frequency information carries the characteristics of the signal, while the high-frequency information gives its details or differences: if the high-frequency content is removed from a human voice, the content can still be understood, whereas if enough low-frequency information is removed only meaningless sounds remain. Low-frequency and high-frequency information are therefore both used in wavelet analysis: the original signal is passed through two complementary filters to produce two signals, and the low-frequency signal is decomposed again and again in a successive decomposition process, so that the signal is broken down into a number of low-resolution components.
S3, after being processed by the coding input module and the discrete wavelet transform module, the video data is transmitted to the storage classification module for classification;
In this embodiment it should be specifically noted that the storage classification module is a transfer station for the video data and stores the video through the classification coding module. The storage method comprises feature extraction and feature pooling: feature extraction extracts image features and audio features from the video stream at 1 s intervals, the image features being the feature vectors of the last fully connected layer of Inception V3 and the audio features being extracted with VGG. Feature pooling takes two input streams, an image input and an audio input; each input stream is fed into a learnable pooling to form a single representation, the two representations are then concatenated and fed into a fully connected layer for dimensionality reduction to obtain a feature vector of fixed dimension, and finally the feature vector is fed into a CG layer, which captures the feature correlation information of the vector and re-adjusts its weights.
S4, finally, the video data is transmitted from the storage classification module to the decoding output module, where the context-adaptive variable length decoding module and the data sampling point compensation module carry out the output processing.
In this embodiment it should be specifically noted that the context-adaptive variable length decoding module in the decoding output module is the inverse process of the context-adaptive variable length coding module, that the input data of the context-adaptive variable length decoding is a bit stream taken from the slice layer data, that the basic unit of decoding is a 4 × 4 pixel block, and that the output is a sequence containing all the amplitude values of each pixel of the 4 × 4 block. The context-adaptive variable length decoding steps are as follows:
a. initializing all coefficient amplitudes;
b. decoding the number of nonzero coefficients and the number of trailing coefficients;
c. decoding the trailing coefficient symbols;
d. decoding the non-zero coefficient amplitude;
e. decoding total_zeros and run_before;
f. combining the nonzero coefficient amplitudes with the run-length information to obtain the whole residual data block.
The data sampling point compensation module first divides the whole picture into a number of units and then performs a data sampling point compensation operation on each pixel in each unit, selecting a pixel compensation mode according to the characteristics of the pixels in the unit so as to reduce the distortion between the source image and the reconstructed image. The adaptive sampling point compensation modes fall into two categories, band (strip) compensation and edge compensation. Band compensation divides the intensity range of the pixel values into a number of bands; the pixels in each band share the same compensation value, and during compensation the band compensation value corresponding to the band in which the reconstructed pixel lies is selected. Edge compensation is mainly used to compensate the contours of the image: the current pixel value is compared with its 2 neighbouring pixel values, one of 4 templates being used to select the 2 neighbouring pixels for comparison, which yields the category of the pixel, and the decoding end applies the corresponding compensation correction according to the pixel category information indicated in the code stream.
It should be noted that the scalability of the coding input module and the decoding output module means that the video stream is divided into two or more code streams, also called layers, at encoding time. At least one of these layers is a base layer and the rest are enhancement layers. The base layer contains the basic and most important information of the video signal, and after receiving it the receiving end can reconstruct an image of basic quality; the enhancement layers contain the detail information of the video signal, and the receiving end decodes the base layer and the enhancement layers together to reconstruct an image of higher quality.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (9)
1. An adaptive-based high-efficiency video compression system, comprising: a coding input module, a discrete wavelet transform module, a storage classification module, a decoding output module and a data sampling point compensation module, wherein the coding input module, which is scalable, codes video data through context-adaptive variable length coding and binary arithmetic coding based on the data sampling point compensation module; the discrete wavelet transform module discretizes the scale of the continuous wavelet transform according to the power of the signal algorithm; the storage classification module is a transfer station for the video data and classifies the video through classification coding; the decoding output module, which is also scalable, decodes the video data based on the context-adaptive variable length decoding module and the data sampling point compensation module and then transmits it; and the data sampling point compensation module reconstructs the compensated image through adaptive data sampling point compensation.
2. The adaptive-based high-efficiency video compression system according to claim 1, wherein: the context-adaptive variable length coding dynamically selects the code table used for coding according to the already-coded syntax elements and updates the length of the trailing coefficient suffix on the fly to obtain a high compression ratio, and the video image is transmitted to the discrete wavelet transform module after prediction, transform and quantization coding.
3. The adaptive-based high-efficiency video compression system according to claim 1, wherein: the binary arithmetic coding converts each non-binary syntax element value into a binary bit sequence; during the encoder's progressive interval calculation, syntax elements that have not yet been output remain pending; the longer the input stream, the smaller the resulting interval and the higher the precision needed to record it for transmission; the coding input module does not wait until the progressive calculation reaches the final interval before outputting a codeword; in the binary arithmetic coding module the upper and lower limits of the interval are represented in binary form, and whenever the most significant bit of the lower limit is the same as that of the upper limit, the bit is removed to ensure that the encoder continuously outputs the code stream during the progressive calculation, with the length of the output sequence in direct proportion to the interval.
4. The adaptive-based high-efficiency video compression system according to claim 1, wherein: the discrete wavelet transform module separates low-frequency information from high-frequency information; the low-frequency information carries the characteristics of the signal, while the high-frequency information gives its details or differences; if the high-frequency content is removed from a human voice, the content can still be understood, whereas if enough low-frequency information is removed only meaningless sounds remain; low-frequency and high-frequency information are both used in wavelet analysis; the original signal is passed through two complementary filters to produce two signals, and the low-frequency signal is decomposed again and again in a successive decomposition process so that the signal is broken down into a number of low-resolution components.
5. The adaptive-based high-efficiency video compression system according to claim 1, wherein: the storage classification module is a transfer station for the video data and stores the video through the classification coding module; the storage method comprises feature extraction and feature pooling; feature extraction extracts image features and audio features from the video stream at 1 s intervals, the image features being the feature vectors of the last fully connected layer of Inception V3 and the audio features being extracted with VGG; feature pooling takes two input streams, an image input and an audio input; each input stream is fed into a learnable pooling to form a single representation, the two representations are then concatenated and fed into a fully connected layer for dimensionality reduction to obtain a feature vector of fixed dimension, and finally the feature vector is fed into a CG layer, which captures the feature correlation information of the vector and re-adjusts its weights.
6. The adaptive-based high-efficiency video compression system according to claim 1, wherein: the context-adaptive variable length decoding module in the decoding output module is the inverse process of the context-adaptive variable length coding module; the input data of the context-adaptive variable length decoding is a bit stream taken from the slice layer data, the basic unit of decoding is a 4 x 4 pixel block, and the output is a sequence containing all the amplitude values of each pixel point of the 4 x 4 block; the context-adaptive variable length decoding steps are as follows:
a. initializing all coefficient amplitudes;
b. decoding the number of nonzero coefficients and the number of trailing coefficients;
c. decoding the trailing coefficient symbols;
d. decoding the non-zero coefficient amplitude;
e. decoding total_zeros and run_before;
f. combining the nonzero coefficient amplitudes with the run-length information to obtain the whole residual data block.
7. The adaptive-based high-efficiency video compression system according to claim 1, wherein: the data sampling point compensation module first divides the whole picture into a number of units and then performs a data sampling point compensation operation on each pixel in each unit, selecting a pixel compensation mode according to the characteristics of the pixels in the unit so as to reduce the distortion between the source image and the reconstructed image; the adaptive sampling point compensation modes fall into two categories, band (strip) compensation and edge compensation, wherein band compensation divides the intensity range of the pixel values into a number of bands, the pixels in each band share the same compensation value, and during compensation the band compensation value corresponding to the band in which the reconstructed pixel lies is selected; edge compensation is mainly used to compensate the contours of the image, the current pixel value being compared with its 2 neighbouring pixel values, one of 4 templates being used to select the 2 neighbouring pixels for comparison so as to obtain the category of the pixel, and the decoding end applies the corresponding compensation correction according to the pixel category information indicated in the code stream.
8. The adaptive-based high-efficiency video compression system according to claim 1, wherein: the scalability of the coding input module and the decoding output module means that the video stream is divided into two or more code streams, also called layers, at encoding time; at least one of these layers is a base layer and the rest are enhancement layers; the base layer contains the basic and most important information of the video signal, and after receiving it the receiving end can reconstruct an image of basic quality; the enhancement layers contain the detail information of the video signal, and the receiving end decodes the base layer and the enhancement layers together to reconstruct an image of higher quality.
9. An algorithm for an adaptive-based high-efficiency video compression system according to any one of claims 1-8, characterized in that it comprises the following specific steps:
S1, first, a user records video information through a shooting device into the coding input module of the shooting device, and the coding input module codes the video data using context-adaptive variable length coding and binary arithmetic coding;
S2, a discrete wavelet transform module in the shooting device discretizes the video data according to the scale of its continuous wavelet transform and the power of the signal algorithm;
S3, after being processed by the coding input module and the discrete wavelet transform module, the video data is transmitted to the storage classification module for classification;
S4, finally, the video data is transmitted from the storage classification module to the decoding output module, where the context-adaptive variable length decoding module and the data sampling point compensation module carry out the output processing.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210822132.7A CN115361556A (en) | 2022-07-12 | 2022-07-12 | High-efficiency video compression algorithm based on self-adaption and system thereof |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210822132.7A CN115361556A (en) | 2022-07-12 | 2022-07-12 | High-efficiency video compression algorithm based on self-adaption and system thereof |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN115361556A true CN115361556A (en) | 2022-11-18 |
Family
ID=84032400
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210822132.7A Pending CN115361556A (en) | 2022-07-12 | 2022-07-12 | High-efficiency video compression algorithm based on self-adaption and system thereof |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115361556A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116567270A (en) * | 2023-02-27 | 2023-08-08 | 慧之谷(南京)信息科技有限公司 | A HZG Lossless High Compression Graphics Coding Method |
| CN117278765A (en) * | 2023-11-23 | 2023-12-22 | 北京铁力山科技股份有限公司 | Video compression method, device, equipment and storage medium |
Citations (7)
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1665299A (en) * | 2005-04-07 | 2005-09-07 | 西安交通大学 | Scalable Video Codec Architecture Design Methodology |
| CN102238387A (en) * | 2011-05-25 | 2011-11-09 | 深圳市融创天下科技股份有限公司 | Video entropy code as well as entropy coding method, device and medium |
| CN103856786A (en) * | 2012-12-04 | 2014-06-11 | 中山大学深圳研究院 | Streaming media video encryption method and device based on H.264 |
| CN106454383A (en) * | 2016-06-01 | 2017-02-22 | 上海魅视数据科技有限公司 | High-rate digital video compression processing system |
| CN110113603A (en) * | 2019-04-22 | 2019-08-09 | 屠晓 | HD video processing terminal |
| CN113590850A (en) * | 2021-01-29 | 2021-11-02 | 腾讯科技(深圳)有限公司 | Multimedia data searching method, device, equipment and storage medium |
| CN113850162A (en) * | 2021-09-10 | 2021-12-28 | 北京百度网讯科技有限公司 | Video auditing method and device and electronic equipment |
Non-Patent Citations (1)
| Title |
|---|
| Lu Wentao: "Research on Motion Estimation Algorithm Optimization and Entropy Coding Based on H.264" (基于H.264的运动估计算法优化及熵编码研究) * |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116567270A (en) * | 2023-02-27 | 2023-08-08 | 慧之谷(南京)信息科技有限公司 | A HZG Lossless High Compression Graphics Coding Method |
| CN117278765A (en) * | 2023-11-23 | 2023-12-22 | 北京铁力山科技股份有限公司 | Video compression method, device, equipment and storage medium |
| CN117278765B (en) * | 2023-11-23 | 2024-02-13 | 北京铁力山科技股份有限公司 | Video compression method, device, equipment and storage medium |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN109785847B (en) | Audio compression algorithm based on dynamic residual error network | |
| CN110290387B (en) | A Generative Model-Based Image Compression Method | |
| CN113810693B (en) | A kind of JPEG image lossless compression and decompression method, system and device | |
| CN103873877A (en) | Image transmission method and device for remote desktop | |
| CN115361556A (en) | High-efficiency video compression algorithm based on self-adaption and system thereof | |
| CN116723333B (en) | Segmentable video coding methods, devices and products based on semantic information | |
| US20240015336A1 (en) | Filtering method and apparatus, computer-readable medium, and electronic device | |
| Li et al. | Multiple description coding based on convolutional auto-encoder | |
| CN105392014B (en) | A kind of wavelet-transform image compression method of optimization | |
| Rojatkar et al. | Image compression techniques: Lossy and lossless | |
| Kabir et al. | Edge-based transformation and entropy coding for lossless image compression | |
| CN102572426A (en) | A method and device for data processing | |
| US9948928B2 (en) | Method and apparatus for encoding an image | |
| US20060193529A1 (en) | Image signal transforming method, image signal inversely-transforming method, image encoding apparatus, image encoding method, image encoding program, image decoding apparatus, image decoding method, and image decoding program | |
| Bhatnagar et al. | Image compression using dct based compressive sensing and vector quantization | |
| CN118055247A (en) | Human-machine-vision-combined scalable image compression system and method | |
| Rahman et al. | An integer wavelet transform based lossless image compression technique using arithmetic coding | |
| Padmavati et al. | DCT combined with fractal quadtree decomposition and Huffman coding for image compression | |
| Rao et al. | Evaluation of lossless compression techniques | |
| Hussin et al. | A comparative study on improvement of image compression method using hybrid DCT-DWT techniques with Huffman encoding for wireless sensor network application | |
| Pabi et al. | Tri-mode dual level 3-D image compression over medical MRI images | |
| Tola | Comparative study of compression functions in modern web programming languages | |
| Sangeeta et al. | Comprehensive Analysis of Flow Incorporated Neural Network-based Lightweight Video Compression Architecture | |
| Garg et al. | Various Image Compression Techniques: A Review. | |
| Dudhagara et al. | A comparative study and analysis of EZW and SPIHT methods for wavelet based image compression |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| RJ01 | Rejection of invention patent application after publication | | Application publication date: 20221118 |