
CN113630597B - Method and system for preventing video from losing packets irrelevant to encoding and decoding - Google Patents


Info

Publication number: CN113630597B
Authority: CN (China)
Prior art keywords: subframe, video, frame, decoded, packet
Legal status: Active (the status is an assumption and is not a legal conclusion)
Application number: CN202110953039.5A
Other languages: Chinese (zh)
Other versions: CN113630597A
Inventors: 闫城辉, 冯文澜
Current assignee: Suirui Technology Group Co Ltd
Original assignee: Suirui Technology Group Co Ltd
Application filed by Suirui Technology Group Co Ltd; priority to CN202110953039.5A
Publication of CN113630597A; application granted; publication of CN113630597B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/172: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a picture, frame or field
    • H04N19/85: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60: Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63: Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647: Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64784: Data processing by the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a codec-independent method and system for preventing video packet loss, belonging to the field of information processing. The method comprises the following steps: S1: splitting each video frame into subframes and encoding each subframe independently with its own encoder; S2: after each subframe is encoded, packetizing the encoded subframes for the network in order of subframe sequence number and sending them to the receiving side; S3: the receiving side preparing a corresponding number of decoders according to the split mode, decoding each subframe separately, and buffering the subframes; S4: placing the decoded subframes at their corresponding positions in the original video frame and synthesizing the final video. The invention can dynamically split the original video into N×N sub-videos that are encoded separately, which shortens the video frames and improves the probability that a video frame is transmitted completely.

Description

Method and system for preventing video from losing packets irrelevant to encoding and decoding
Technical Field
The invention belongs to the field of information processing and in particular relates to a method and system, independent of the codec used, for preventing video packet loss.
Background
In a typical multimedia conference or multimedia communication system, the video subsystem is processed as follows. Transmitting side: video capture, video encoding, packetization of encoded frames, and network transmission. Receiving side: network reception, assembly of complete frames, video decoding, and video rendering.
Currently, most video coding algorithms apply predictive coding to video frames in either the spatial or the temporal domain. A frame coded in the spatial domain does not reference other frames (a key frame for short), but its compression ratio is lower and its bit rate higher; a frame coded in the temporal domain references other frames (a reference frame for short) and achieves a higher compression ratio and a lower bit rate. Because the compressed bitstream carries strong semantic dependencies, a single network packet loss can prevent the current frame from being decoded and cause error propagation: reference frames that depend on this frame, and the frames that follow, can no longer be decoded correctly, so video quality drops sharply.
In the prior art, techniques for resisting network packet loss fall roughly into forward error correction and backward error correction. A typical backward error correction method is packet-loss retransmission: after the receiving side detects a lost packet, it asks the transmitting side to retransmit it, waits for the retransmitted packet to arrive, and only then assembles the complete frame and proceeds with decoding and subsequent processing. A common forward error correction method is for the transmitting side to send redundant packets in advance; after the receiving side detects a loss, it may recover the lost packet from the redundancy.
Backward error correction requires interaction between the receiving and transmitting sides, so it incurs a delay governed by the round-trip network latency; in a lossy network the retransmission request and the retransmission response can themselves be lost, requiring several rounds of interaction and adding further delay. Forward error correction cannot predict which packets will be lost, so the protection efficiency of the redundant packets is low, and the receiving side generally cannot guarantee that lost packets are recoverable from the redundancy.
In addition, some codec algorithms provide their own forward error correction mechanisms. The common H.264 algorithm offers flexible macroblock ordering (FMO), which scatters different macroblocks into different slices, each slice being encoded and decoded independently; but because its partition unit is the macroblock, recovery of a whole macroblock is poor when some slices cannot be decoded. H.264 also provides scalable video coding (SVC), in which the encoder produces a bitstream containing one or more independently decodable sub-streams that may differ in bit rate, frame rate (temporal domain) and spatial resolution (spatial domain). However, both the temporal and the spatial dimension are split into a base layer and enhancement layers: when an enhancement layer cannot be decoded because of packet loss, the base-layer output (at a lower frame rate or resolution) can still be used, but if the base layer itself cannot be decoded the enhancement layers cannot be decoded either, so everything depends on the base layer.
In view of this, the present invention has been made.
Disclosure of Invention
The aim of the invention is to provide a codec-independent method and system for preventing video packet loss, which dynamically splits the original video into N×N sub-videos that are encoded separately, so that the length of each video frame is reduced and the probability that a video frame is transmitted completely is improved.
To achieve this aim, the invention provides a codec-independent method for preventing video packet loss, comprising the following steps:
S1: splitting the video frame into subframes, each subframe being encoded independently with its own encoder;
S2: after each subframe is encoded, packetizing the encoded subframes for the network in order of subframe sequence number and sending them to the receiving side;
S3: the receiving side preparing a corresponding number of decoders according to the split mode, decoding each subframe separately, and buffering the subframes;
S4: placing the decoded subframes at their corresponding positions in the original video frame and synthesizing the final video.
Further, in step S1, splitting the video frame comprises: splitting the pixels of the video frame according to an N×N split mode, extracting the pixels at the same position from each split image, and arranging them in order to form one subframe.
Further, in step S2, frame-level separation parameters are sent to the receiving side before each subframe is transmitted, comprising the split mode and the original video resolution; in addition, each network packet carries packet-level separation parameters, comprising the frame number, the subframe number, the total number of packets of the subframe, and the packet number.
Further, step S3 comprises the following steps:
S301: after a network packet is received, sending it to the receive buffer of the corresponding subframe for frame assembly according to the subframe sequence number it carries; the assembly operation comprises removing non-coded data such as the separation parameters and then sorting by packet sequence number;
S302: calculating the packet loss rate from the received packet sequence numbers and periodically feeding it back to the transmitting side.
Further, in step S4, the operation of placing a decoded subframe at its corresponding position in the original video frame also includes an original-bitmap check: when a subframe fails to decode, the subframes around it are examined. If only one surrounding subframe has been decoded, its pixel values are copied directly and used as the pixel values of the subframe that failed to decode; if several surrounding subframes have been decoded, one of them is chosen arbitrarily and copied, or the average of their pixel values is used as the pixel values of the subframe that failed to decode.
The invention also provides a codec-independent system for preventing video packet loss, comprising a video processing module, a transmitting module, a receiving module and a video synthesis module;
the video processing module is used to split video frames into subframes, each subframe being encoded independently with its own encoder;
the transmitting module is used to packetize the encoded subframes for the network in order of subframe sequence number, after each subframe is encoded, and to send them to the receiving module;
the receiving module is used to prepare a corresponding number of decoders according to the split mode, decode each subframe separately, and buffer the subframes;
the video synthesis module is used to place the decoded subframes at their corresponding positions in the original video frame and to synthesize the final video.
Further, the video processing module is also used to split the pixels of the video frame according to an N×N split mode, extract the pixels at the same position from each split image, and arrange them in order to form one subframe.
Further, the transmitting module sends frame-level separation parameters to the receiving side before each subframe is transmitted, comprising the split mode and the original video resolution; in addition, each network packet carries packet-level separation parameters, comprising the frame number, the subframe number, the total number of packets of the subframe, and the packet number.
Further, the receiving module can also perform the following operations:
after a network packet is received, sending it to the receive buffer of the corresponding subframe for frame assembly according to the subframe sequence number it carries; the assembly operation comprises removing non-coded data such as the separation parameters and then sorting by packet sequence number;
calculating the packet loss rate from the received packet sequence numbers and periodically feeding it back to the transmitting module.
Further, the video synthesis module is also used to perform the original-bitmap check: when a subframe fails to decode, the subframes around it are examined; if only one surrounding subframe has been decoded, its pixel values are copied directly and used as the pixel values of the subframe that failed to decode; if several surrounding subframes have been decoded, one of them is chosen arbitrarily and copied, or the average of their pixel values is used as the pixel values of the subframe that failed to decode.
The invention provides a codec-independent method and system for preventing video packet loss. Before encoding, according to the current network conditions, each video frame (a pixel bitmap) is uniformly split at the pixel level into 1, 4, 9, … subframes using a 1×1 (normal encoding), 2×2, 3×3, … split mode, and each subframe is encoded independently; the subframe bitmaps are decoded independently, assembled into a complete bitmap, and passed on to rendering and subsequent processing. Before encoding of each key frame and its group of reference frames begins, the split mode can be adjusted dynamically according to the packet loss rate fed back by the receiving side. Because every subframe is encoded and decoded independently, there is no base-layer/enhancement-layer style dependency: as long as the bitstream of a single subframe decodes normally, a video frame can be output, possibly at a lower resolution, and playback is smoother than with the delay of backward error correction.
Drawings
Fig. 1 is a flow chart of the codec-independent method for preventing video packet loss according to the present invention.
Fig. 2 is an example of a video frame to be split, for the codec-independent method for preventing video packet loss according to the present invention.
Fig. 3 is an example of the subframes after splitting, for the codec-independent method for preventing video packet loss according to the present invention.
Fig. 4 is an example of a decoded complete frame, for the codec-independent method for preventing video packet loss according to the present invention.
Fig. 5 is a schematic structural diagram of the codec-independent system for preventing video packet loss in this embodiment.
Detailed Description
In order that those skilled in the art will better understand the present invention, the present invention will be described in further detail with reference to specific embodiments.
As shown in fig. 1, an embodiment of the present invention is a codec-independent method for preventing video packet loss. It dynamically splits the original video into N×N sub-videos according to the real-time packet loss rate and encodes them separately, which reduces the length of each video frame and improves the probability that a video frame is transmitted completely. The pixels of decoded subframes are then used to reconstruct the corresponding pixels of subframes that could not be decoded, so that a low-resolution, renderable video frame is still available when packets are lost, improving video fluency.
Specifically, the method comprises the following steps:
S1: split the video frame into subframes and encode each subframe independently with its own encoder.
Specifically, the pixels of the video frame are split according to an N×N split mode (1×1, 2×2, 3×3, and so on), and the pixels at the same position are extracted from each split image and arranged in order to form one subframe; the specific split mode can be adjusted according to the packet loss rate.
If the width or height of the video resolution is not divisible by the split mode, padding can be applied to the left and lower sides of the video. The padded edge pixels are normally filled with black, since their content does not affect the final decoding result, and common video codec algorithms (e.g. H.264) apply similar padding.
The splitting in this step works as follows: referring to fig. 2 and taking a 2×2 split as an example, assume the original video resolution is 4×4 and each cell of the table represents one pixel. The frame is divided into four images according to the 2×2 mode (shown with different background colors); the pixels at the same position are extracted from each divided image and arranged in order to form one subframe. The resulting subframes are shown in fig. 3.
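The following is a minimal sketch of this pixel-level split and its inverse, assuming frames held as NumPy arrays in (height, width[, channels]) layout; the function names and the zero (black) padding placed on the bottom and left edges are illustrative choices rather than details taken from the patent.

```python
import numpy as np

def split_frame(frame: np.ndarray, n: int) -> list:
    """N x N split: the frame is tiled into n x n pixel blocks, and subframe (a, b)
    collects the pixel at offset (a, b) of every block, i.e. frame[a::n, b::n]."""
    h, w = frame.shape[:2]
    pad_h, pad_w = (-h) % n, (-w) % n            # make height and width divisible by n
    if pad_h or pad_w:
        pad = ((0, pad_h), (pad_w, 0)) + ((0, 0),) * (frame.ndim - 2)
        frame = np.pad(frame, pad)               # black padding on the bottom/left edges
    return [frame[a::n, b::n] for a in range(n) for b in range(n)]

def merge_frame(subframes: list, n: int, height: int, width: int) -> np.ndarray:
    """Inverse of split_frame: interleave the subframes and crop away the padding."""
    sub_h, sub_w = subframes[0].shape[:2]
    out = np.zeros((sub_h * n, sub_w * n) + subframes[0].shape[2:], subframes[0].dtype)
    for idx, sub in enumerate(subframes):
        a, b = divmod(idx, n)
        out[a::n, b::n] = sub
    return out[:height, out.shape[1] - width:]   # drop the bottom/left padding again
```

For the 4×4 frame of fig. 2, split_frame(frame, 2) yields the four 2×2 subframes of fig. 3, and merge_frame restores the original layout.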
Following the general principles of current codec algorithms, for coding based on spatial intra prediction, the more subframes the frame is split into, the lower the correlation between adjacent pixels within each subframe and the lower the efficiency of intra-frame predictive coding. Conversely, the more subframes the frame is split into, the lower the resolution of each subframe, the shorter the encoded bitstream of each subframe, the fewer network packets it occupies, and the higher the probability that the bitstream of one subframe is received completely and decoded. Choosing a split mode is therefore a trade-off between coding efficiency and resistance to network packet loss, and a split mode with more subframes can be selected, according to changes in the network packet loss rate, before each key frame and its group of reference frames is encoded.
For example: if the packet loss rate is at most 20%, use a 2×2 split; if it is above 20% and at most 40%, use a 3×3 split; if it is above 40%, use a 4×4 split; and so on.
The encoder used to encode the subframes can be chosen as needed; encoding proceeds exactly as in normal video encoding and is not described further in this disclosure.
S2: after each subframe is encoded, packetize the encoded subframes for the network in order of subframe sequence number and send them to the receiving side.
Before each subframe is transmitted, frame-level separation parameters are sent to the receiving side, comprising at least the split mode and the original video resolution.
In addition, every network packet carries packet-level separation parameters, comprising at least the frame number (incremented by 1 per frame), the subframe number, the total number of packets of the subframe, and the packet number.
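A sketch of how such packets might be built, assuming a simple big-endian binary header; the field widths, the 1200-byte payload size and the names used here are illustrative assumptions, not a wire format defined by the patent.

```python
import struct
from dataclasses import dataclass

@dataclass
class FrameLevelParams:
    """Frame-level separation parameters announced before the subframes of a frame."""
    split_n: int   # N of the N x N split mode
    width: int     # original video width
    height: int    # original video height

# Packet-level separation parameters: frame_no (uint32), subframe_no (uint16),
# total packets of the subframe (uint16), packet number (uint16).
PKT_HDR = struct.Struct("!IHHH")

def packetize(frame_no: int, subframe_no: int, payload: bytes, mtu: int = 1200) -> list:
    """Split one encoded subframe bitstream into network packets, each prefixed
    with the packet-level separation parameters."""
    chunks = [payload[i:i + mtu] for i in range(0, len(payload), mtu)] or [b""]
    return [PKT_HDR.pack(frame_no, subframe_no, len(chunks), i) + chunk
            for i, chunk in enumerate(chunks)]
```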
S3: the reception prepares a corresponding number of decoders to decode respectively according to the division mode, and buffers the sub-frames.
Specifically, step S3 includes the following steps.
S301: after a network packet is received, it is sent to the receive buffer of the corresponding subframe for frame assembly according to the subframe sequence number it carries. The receive buffer sorts and stores the packets of the subframe by packet sequence number and is used to assemble them, in order, into a complete frame; the assembly operation comprises removing non-coded data such as the separation parameters and then sorting by packet sequence number.
S302: the packet loss rate is calculated from the received packet sequence numbers and periodically fed back to the transmitting side, so that the split mode can be changed in real time as the packet loss rate changes.
From the total number of packets of a subframe and the sequence numbers of the packets received so far, the completeness of the subframe can be judged. When a subframe is complete, it is sent to the corresponding decoder; the decoder matches the encoder used on the encoding side, and decoding proceeds exactly as in normal decoding.
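A sketch of the receiver-side bookkeeping for S301 and S302 under the header layout assumed above: packets are buffered per (frame, subframe), completeness is judged from the advertised packet total, and a rough loss rate is derived from the counts. The class and method names are hypothetical.

```python
class SubframeBuffer:
    """Receive buffers for one video stream: reassembles subframes and tracks loss."""

    def __init__(self):
        self.packets = {}   # (frame_no, subframe_no) -> {pkt_no: payload}
        self.totals = {}    # (frame_no, subframe_no) -> announced total packet count

    def add(self, frame_no, subframe_no, total_pkts, pkt_no, payload):
        key = (frame_no, subframe_no)
        self.packets.setdefault(key, {})[pkt_no] = payload
        self.totals[key] = total_pkts

    def complete(self, frame_no, subframe_no) -> bool:
        key = (frame_no, subframe_no)
        return key in self.totals and len(self.packets[key]) == self.totals[key]

    def assemble(self, frame_no, subframe_no) -> bytes:
        """Join the payloads in packet order (call only after complete() is True)."""
        key = (frame_no, subframe_no)
        return b"".join(self.packets[key][i] for i in range(self.totals[key]))

    def loss_rate(self) -> float:
        """Fraction of announced packets not yet received, fed back periodically."""
        expected = sum(self.totals.values())
        received = sum(len(p) for p in self.packets.values())
        return 1.0 - received / expected if expected else 0.0
```

A complete subframe is handed to its decoder, and loss_rate() provides the figure fed back to the transmitting side so it can adjust the split mode.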
S4: and putting the decoded sub-frames into corresponding positions in the original video frames, and synthesizing a final video.
When the last subframe of a frame has been assembled completely and its decoded output produced, or when a packet of the next frame is received even though assembly is incomplete, processing enters the original-bitmap check stage.
If every subframe has been decoded normally, the frame is complete and goes directly to rendering and subsequent processing.
If a subframe fails to decode, the subframes around it are examined. If only one surrounding subframe has been decoded, its pixel values can be copied directly and used as the pixel values of the subframe that failed to decode; if several surrounding subframes have been decoded, one of them can be chosen arbitrarily and copied, or the average of their pixel values can be used as the pixel values of the subframe that failed to decode.
Again taking the 2×2 split of figs. 2-3 as an example, suppose only subframe 2 is received intact. Each pixel of subframes 1 and 4 can then replicate the corresponding pixel of subframe 2, and each pixel of subframe 3 can replicate the corresponding pixel of subframe 1 or subframe 4; the decoded frame is shown in fig. 4, where each cell of the table represents one pixel. The final result is equivalent to a 2×2 pixel image enlarged to the original size.
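A sketch of this original-bitmap check, assuming subframes are indexed by (row, column) on the N×N lattice and only the four direct neighbours are consulted; the loop repeats so that freshly filled subframes can serve as sources, which reproduces the fig. 4 example where subframe 3 copies from the already-filled subframe 1 or 4.

```python
import numpy as np

def conceal(decoded: dict, n: int) -> dict:
    """Fill subframes that failed to decode from neighbouring subframes: a lone
    decoded neighbour is copied, several are averaged (the text also allows
    picking one arbitrarily). Keys are (row, col) positions on the N x N grid."""
    filled = dict(decoded)
    changed = True
    while changed and len(filled) < n * n:
        changed = False
        for a in range(n):
            for b in range(n):
                if (a, b) in filled:
                    continue
                neigh = [filled[p] for p in ((a - 1, b), (a + 1, b), (a, b - 1), (a, b + 1))
                         if p in filled]
                if len(neigh) == 1:
                    filled[(a, b)] = neigh[0].copy()
                    changed = True
                elif len(neigh) > 1:
                    filled[(a, b)] = np.mean(neigh, axis=0).astype(neigh[0].dtype)
                    changed = True
    return filled
```

As long as at least one subframe decoded, every position ends up filled, and interleaving the result with merge_frame from the earlier sketch gives the nearest-neighbour style enlargement described above.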
After the original-bitmap check is completed, the left or lower padding area of the synthesized final video is cropped, if necessary, according to the original video width and height carried in the frame-level separation parameters.
When a subframe cannot be decoded because of packet loss, the receiving side feeds back a request for a key frame to the transmitting side; until the next key frame arrives, at least a low-resolution video can still be displayed, which minimizes the stalling that would otherwise be caused by waiting for the key frame.
As shown in fig. 5, an embodiment of the present invention is a codec-independent system for preventing video packet loss, which includes a video processing module 1, a transmitting module 2, a receiving module 3 and a video synthesis module 4.
The video processing module 1 is configured to split video frames into subframes, each subframe being encoded independently with its own encoder.
Specifically, the pixels of the video frame are split according to an N×N split mode: each split image contains N×N pixels, and the pixel at the same position is extracted from each split image; these pixels are arranged in order to form one subframe. The specific split mode can be adjusted according to the packet loss rate.
The encoder used to encode the subframes can be chosen as needed and follows the normal video encoding flow.
The transmitting module 2 is configured to packetize the encoded subframes for the network in order of subframe sequence number, after each subframe is encoded, and to send them to the receiving module 3.
Specifically, before each subframe is sent, frame-level separation parameters are sent to the receiving module, comprising at least the split mode and the original video resolution.
In addition, every network packet carries packet-level separation parameters, comprising at least the current frame number (incremented by 1 per frame), the current subframe number, the total number of packets of the current subframe, and the current packet number.
The receiving module 3 is configured to prepare a corresponding number of decoders according to the split mode after receiving the network packets, decode each subframe separately, and buffer the subframes.
Specifically, the receiving module 3 can also perform the following operations:
after a network packet is received, it is sent to the receive buffer of the corresponding subframe for frame assembly according to the subframe sequence number it carries; the assembly operation comprises removing non-coded data such as the separation parameters and then sorting by packet sequence number;
the packet loss rate is calculated from the received packet sequence numbers and periodically fed back to the transmitting side.
From the total number of packets of a subframe and the sequence numbers of the packets received so far, the completeness of the subframe can be judged; when a subframe is complete, it is sent to the corresponding decoder, which matches the encoder on the encoding side, and decoding proceeds exactly as in normal decoding.
The video synthesis module 4 is configured to place the decoded subframes, according to their subframe sequence numbers, at their corresponding positions in the original video frame and to synthesize the final video.
When the last subframe of a video frame has been assembled completely and its decoded output produced, or when a packet of the next frame is received even though assembly is incomplete, processing enters the original-bitmap check stage.
If every subframe of the current frame has been decoded normally, the frame is complete and goes directly to rendering and subsequent processing.
If the current subframe fails to decode, the subframes around it are checked. If only one surrounding subframe has been decoded, its pixel values are copied directly and used as the pixel values of the subframe that failed to decode; if several surrounding subframes have been decoded, one of them may be chosen arbitrarily and copied, or the average of their pixel values may be used as the pixel values of the subframe that failed to decode.
After the original-bitmap check is completed, the left or lower padding area of the synthesized final video is cropped, if necessary, according to the original video width and height carried in the frame-level separation parameters.
When a subframe cannot be decoded because of packet loss, the receiving module 3 feeds back a request for a key frame to the transmitting module 2; until the next key frame arrives, at least a low-resolution video can still be displayed, which minimizes the stalling that would otherwise be caused by waiting for the key frame.
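Tying the earlier sketches together, the receive-side flow of modules 3 and 4 might look as follows; the decoder objects and their decode() method are placeholders for whatever codec is actually used, and the helper reuses conceal() and merge_frame() from the sketches above.

```python
def receive_frame(buffer, decoders, frame_no, n, width, height):
    """Assemble, decode, conceal and merge one video frame on the receiving side.
    `decoders` maps subframe index -> an object whose decode(bytes) returns a
    NumPy image or None on failure (a placeholder interface)."""
    decoded = {}
    for idx in range(n * n):
        if buffer.complete(frame_no, idx):
            img = decoders[idx].decode(buffer.assemble(frame_no, idx))
            if img is not None:
                decoded[divmod(idx, n)] = img
    if not decoded:
        return None                     # nothing decodable: request a key frame instead
    filled = conceal(decoded, n)        # fills every position once one subframe exists
    subframes = [filled[(a, b)] for a in range(n) for b in range(n)]
    return merge_frame(subframes, n, height, width)
```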
Specific examples have been set forth herein to illustrate the invention in detail; the description of the above examples is only intended to aid understanding of the core concept of the invention. It should be noted that any obvious modifications, equivalents or other improvements made by those skilled in the art without departing from the inventive concept are intended to fall within the scope of the present invention.

Claims (6)

1. A method for preventing video packet loss, irrelevant to encoding and decoding, characterized by comprising the following steps:
S1: dividing the video frame to obtain subframes, and independently encoding each subframe with a corresponding encoder; wherein dividing the video frame comprises: dividing the pixels of the video frame according to an N×N division mode, extracting the pixels at the same position from each divided image, and arranging them in order to form one subframe;
S2: after each subframe is encoded, packetizing the encoded subframes for the network in order of subframe sequence number and sending them to a receiving side;
S3: the receiving side preparing a corresponding number of decoders for decoding respectively according to the division mode, and buffering the subframes;
S4: placing the decoded subframes at their corresponding positions in the original video frame, and synthesizing a final video;
wherein in step S4, the operation of placing a decoded subframe at its corresponding position in the original video frame further includes an original bitmap checking process: when a subframe fails to be decoded normally, each subframe around it is checked; if there is only one decoded subframe around it, the pixel values of that decoded subframe are copied directly as the pixel values of the subframe that was not decoded normally; if there are several decoded subframes, one of them is arbitrarily selected and copied, or the average of their pixel values is calculated as the pixel values of the subframe that was not decoded normally.
2. The method for preventing packet loss in a codec-independent video according to claim 1, wherein in step S2, frame-level separation parameters are transmitted to the receiving side before each subframe is transmitted, comprising: segmentation mode and video original resolution; and, the network packetization carries packet-level separation parameters, which include: frame number, subframe number, total number of packets of a subframe, and packet number.
3. The method for preventing video packet loss, irrelevant to encoding and decoding, according to claim 2, wherein the step S3 comprises the steps of:
s301: after receiving the network sub-packet, sending the sub-packet into a corresponding sub-frame receiving buffer for frame splicing according to the sub-frame sequence number carried by the network sub-packet; the framing operation includes: after the segmentation parameter data are removed, sequencing is carried out according to the sub-package serial numbers;
s302: and calculating the packet loss rate according to the received packet sequence number, and feeding back to the transmitting side at regular time.
4. A system for preventing video packet loss, irrelevant to encoding and decoding, characterized by comprising a video processing module, a transmitting module, a receiving module and a video synthesis module;
the video processing module is used for dividing the video frames to obtain subframes, and each subframe is independently encoded by using a corresponding encoder; the video processing module is also used for dividing pixels of the video frame according to an N multiplied by N dividing mode, extracting pixels at the same position from each divided image and sequentially sequencing and synthesizing the pixels into a subframe;
the transmitting module is used for sequentially carrying out network subpackaging on each coded subframe according to the sequence of the subframe sequence numbers after coding each subframe and transmitting the coded subframe to the receiving module;
the receiving module is used for preparing a corresponding number of decoders for decoding respectively according to the segmentation mode and buffering subframes;
the video synthesis module is used for placing the decoded sub-frames in the corresponding positions in the original video frames and synthesizing the final video;
the video synthesis module is also used for carrying out original bitmap checking processing, and checking all subframes around a certain subframe when the subframe fails to be decoded normally; if there is only one decoded subframe around, directly copying the pixel value of the decoded subframe as the pixel value of the subframe which is not decoded normally; if there are a plurality of decoded subframes, one decoded subframe is arbitrarily selected from among them to copy, or an average value of pixel values of the subframes is calculated as a pixel value of a subframe which is not decoded normally.
5. The codec-independent video anti-lost packet system according to claim 4, wherein the transmitting module transmits frame-level separation parameters to the receiving side before each subframe is transmitted, comprising: segmentation mode and video original resolution; and, the network packetization carries packet-level separation parameters, which include: frame number, subframe number, total number of packets of a subframe, and packet number.
6. The codec independent video anti-lost system according to claim 5, wherein the receiving module is further configured to:
after receiving the network sub-packet, sending the sub-packet into a corresponding sub-frame receiving buffer for frame splicing according to the sub-frame sequence number carried by the network sub-packet; the framing operation includes: after the segmentation parameter data are removed, sequencing is carried out according to the sequence number of the packet;
and calculating the packet loss rate according to the received packet sequence number, and feeding back to the sending module at regular time.
CN202110953039.5A 2021-08-19 2021-08-19 Method and system for preventing video from losing packets irrelevant to encoding and decoding Active CN113630597B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110953039.5A CN113630597B (en) 2021-08-19 2021-08-19 Method and system for preventing video from losing packets irrelevant to encoding and decoding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110953039.5A CN113630597B (en) 2021-08-19 2021-08-19 Method and system for preventing video from losing packets irrelevant to encoding and decoding

Publications (2)

Publication Number Publication Date
CN113630597A CN113630597A (en) 2021-11-09
CN113630597B true CN113630597B (en) 2024-01-23

Family

ID=78386603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110953039.5A Active CN113630597B (en) 2021-08-19 2021-08-19 Method and system for preventing video from losing packets irrelevant to encoding and decoding

Country Status (1)

Country Link
CN (1) CN113630597B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114257349A (en) * 2021-12-16 2022-03-29 北京数码视讯技术有限公司 Data processing system and method
CN114363576B (en) * 2022-01-12 2024-11-22 厦门市思芯微科技有限公司 A method for dynamic image transmission of wifi visual products
CN115134629B (en) * 2022-05-23 2023-10-31 阿里巴巴(中国)有限公司 Video transmission method, system, equipment and storage medium


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6990151B2 (en) * 2001-03-05 2006-01-24 Intervideo, Inc. Systems and methods for enhanced error concealment in a video decoder
US9154799B2 (en) * 2011-04-07 2015-10-06 Google Inc. Encoding and decoding motion via image segmentation
US10027982B2 (en) * 2011-10-19 2018-07-17 Microsoft Technology Licensing, Llc Segmented-block coding

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1581972A (en) * 2004-05-20 2005-02-16 复旦大学 Error-concealed video decoding method
CN103888780A (en) * 2009-03-04 2014-06-25 瑞萨电子株式会社 Dynamic image encoding device, dynamic image decoding device, dynamic image encoding method and dynamic image decoding method
CN105681342A (en) * 2016-03-08 2016-06-15 随锐科技股份有限公司 Anti-error code method and system of multi-channel video conference system based on H264
CN106060582A (en) * 2016-05-24 2016-10-26 广州华多网络科技有限公司 Video transmission system, video transmission method and video transmission apparatus
CN108833932A (en) * 2018-07-19 2018-11-16 湖南君瀚信息技术有限公司 A kind of method and system for realizing the ultralow delay encoding and decoding of HD video and transmission

Also Published As

Publication number Publication date
CN113630597A (en) 2021-11-09

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant