
CN111954034B - Video coding method and system based on terminal equipment parameters - Google Patents

Video coding method and system based on terminal equipment parameters

Info

Publication number
CN111954034B
CN111954034B (application CN202011116351.0A)
Authority
CN
China
Prior art keywords
video
display device
coding
stream data
decoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011116351.0A
Other languages
Chinese (zh)
Other versions
CN111954034A (en)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Bairui Network Technology Co ltd
Original Assignee
Guangzhou Bairui Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Bairui Network Technology Co ltd filed Critical Guangzhou Bairui Network Technology Co ltd
Priority to CN202011116351.0A
Publication of CN111954034A
Application granted
Publication of CN111954034B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40Support for services or applications
    • H04L65/403Arrangements for multi-party communication, e.g. for conferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/70Media network packetisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/75Media network packet handling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80Responding to QoS
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to the technical field of video coding, and in particular to a video coding method and system based on terminal device parameters. When the method is applied, first video stream data and second video stream data of a first display device are obtained, and the current terminal device parameters of the first display device are extracted. When those parameters indicate that the first display device carries a coding/decoding defect identifier, pixel distribution information of the first display device is extracted from them; the video coding and decoding defect distribution of a second display device is then determined, and the video stream data to be displayed by the second display device is encoded so that the second display device displays video normally. The invention can send different audio and video coding parameter information to different terminal display devices and deliver different audio and video streams to different types of terminals, ensuring that terminal devices with different parameters achieve smooth audio and video playback and improving audio and video communication quality.

Description

Video coding method and system based on terminal equipment parameters
Technical Field
The present invention relates to the field of video coding technologies, and in particular, to a video coding method and system based on terminal device parameters.
Background
With the continuous development and popularization of computer and internet technology, video communication brings convenience to people's daily work and personal communication. Today, video communication supports sharing video images among multiple endpoints. For example, in a video conference, the host can share the conference video with the other participants in real time, breaking geographic limitations. In the prior art, however, the same encoding scheme is applied to every terminal, so different terminal display devices receive identical audio and video streams, and smooth playback cannot be guaranteed across devices with different parameters. When video-image sharing is implemented this way, the images on some devices are displayed unclearly or stutter, degrading audio and video communication quality.
Disclosure of Invention
The present specification provides a video encoding method and system based on terminal device parameters, so as to solve or partially solve the technical problems in the background art.
In a first aspect, a video encoding method based on terminal device parameters is provided, and is applied to a video encoding device, where the video encoding device communicates with multiple display devices, and the multiple display devices communicate with each other to form a video networking, and the method includes:
obtaining first video stream data of a first display device in the video networking in a current time period;
obtaining second video stream data of the first display device in an idle period corresponding to a second display device in the video networking; the second display device is a display device which has video sharing authority with the first display device in the video networking, and the idle period is used for representing that a video coding and decoding thread of the second display device is in a closed state;
extracting terminal equipment parameters of the first display equipment according to the first video stream data and the second video stream data to obtain current terminal equipment parameters corresponding to the first display equipment; when the current terminal equipment parameters corresponding to the first display equipment represent that the first display equipment has coding and decoding defect marks, extracting pixel distribution information of the first display equipment from the current terminal equipment parameters;
determining video coding and decoding defect distribution of the second display device based on the pixel distribution information and a first video decoding quality factor of the second display device in the current time interval and a second video decoding quality factor of the second display device in the idle time interval, and coding shared video stream data sent by the first display device to the second display device according to the video coding and decoding defect distribution.
In an alternative embodiment, encoding the shared video stream data sent by the first display device to the second display device according to the video codec defect distribution includes:
determining a gray level co-occurrence matrix of a defect area corresponding to the video coding and decoding defect distribution;
if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a first set interval, encoding the shared video stream data according to the display configuration parameter corresponding to the second display device in the video coding and decoding defect distribution;
if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a second set interval, encoding the shared video stream data according to the encoding unit with the depth of zero in the video encoding and decoding defect distribution;
and if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a third set interval, encoding the shared video stream data according to the encoding unit with the depth of one in the video encoding and decoding defect distribution.
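The dispatch above can be sketched in code. The following is a minimal illustration, not the patent's implementation: it computes an adjacent-pixel correlation coefficient from a gray level co-occurrence matrix (GLCM) over one pixel offset, then selects an encoding strategy by interval. The number of gray levels, the offset, and the three interval boundaries are all assumptions — the patent does not specify them.

```python
def glcm_correlation(patch, dx=1, dy=0, levels=4):
    """Correlation coefficient of the normalized gray-level co-occurrence
    matrix for one pixel offset (dx, dy). Values near +1 or -1 mean
    neighboring gray levels are strongly (anti-)correlated."""
    h, w = len(patch), len(patch[0])
    glcm = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[patch[y][x]][patch[y + dy][x + dx]] += 1
            total += 1
    p = [[c / total for c in row] for row in glcm]          # normalize to probabilities
    mu_i = sum(i * sum(row) for i, row in enumerate(p))      # row-marginal mean
    mu_j = sum(j * p[i][j] for i in range(levels) for j in range(levels))
    var_i = sum((i - mu_i) ** 2 * sum(row) for i, row in enumerate(p))
    var_j = sum((j - mu_j) ** 2 * p[i][j] for i in range(levels) for j in range(levels))
    if var_i == 0 or var_j == 0:
        return 1.0  # constant patch: perfectly correlated by convention
    cov = sum((i - mu_i) * (j - mu_j) * p[i][j]
              for i in range(levels) for j in range(levels))
    return cov / (var_i ** 0.5 * var_j ** 0.5)

def choose_encoding(corr, first=(0.66, 1.0), second=(0.33, 0.66)):
    """Map the correlation coefficient to an encoding strategy by the three
    set intervals of the embodiment (interval bounds are illustrative)."""
    if first[0] <= corr <= first[1]:
        return "display-config-params"      # first set interval
    if second[0] <= corr < second[1]:
        return "depth-zero-coding-unit"     # second set interval
    return "depth-one-coding-unit"          # third set interval (and fallback)
```

A checkerboard patch has strongly anti-correlated neighbors and lands in the third interval, while a uniform patch lands in the first.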
In an alternative embodiment, encoding the shared video stream data according to coding units with a depth of zero in the video codec defect distribution includes:
reading video output format information uploaded by the second display device when the second display device establishes the video sharing right with the first display device from a preset coding library, and acquiring queue characteristic factors of a plurality of pixel gray level queues corresponding to the second display device according to the video output format information;
calculating the gray defect distribution of each pixel gray queue through the queue characteristic factors; sequencing each pixel gray level queue according to the sequence of the gray level defect distribution from large to small, and selecting a set number of pixel gray level queues which are sequenced in the front as target gray level queues;
and extracting a coding unit with the depth of zero from the video coding and decoding defect distribution based on the target gray level queue, and coding the shared video stream data according to the video frame decoding logic information corresponding to the coding unit.
In an alternative embodiment, encoding the shared video stream data according to the display configuration parameter corresponding to the second display device in the video codec defect distribution includes:
acquiring first video synchronization information between the first display device and the second display device from the first display device, and acquiring second video synchronization information between the first display device and the second display device from the second display device;
performing feature similarity calculation on video frames in the same time period in the first video synchronization information and the second video synchronization information to obtain a feature similarity calculation result; judging whether the first video synchronization information and the second video synchronization information are synchronous according to the feature similarity calculation result;
if the first video synchronization information is synchronous with the second video synchronization information, extracting a coding compression ratio of the first display device from the first video synchronization information, determining a pixel gray scale defect list of the first display device according to the coding compression ratio, calibrating a pixel boundary value of the first display device corresponding to the pixel gray scale defect list to obtain a target gray scale defect list, determining a display configuration parameter corresponding to the second display device from the video coding and decoding defect distribution according to the target gray scale defect list, and encoding the shared video stream data by using the display configuration parameter and the target gray scale defect list;
if the first video synchronization information is not synchronous with the second video synchronization information, extracting a video frame list which is not synchronous with the second video synchronization information from the first video synchronization information, and determining video frame output information and video frame coding information of the video frame list; mapping field features corresponding to any group of first information fields in the video frame output information to a vector list of coding time domain distribution information corresponding to a second information field with the largest coding loss rate in the video frame coding information, so as to obtain mapping features corresponding to the field features in the vector list; and extracting set encoding and decoding parameters corresponding to the shared video frames in the video encoding and decoding defect distribution based on the mapping characteristics, determining display configuration parameters corresponding to the second display device through the direction identification corresponding to the set encoding and decoding parameters, and encoding the shared video stream data by adopting the display configuration parameters and the video frame encoding information.
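The synchronization judgment at the start of this embodiment can be sketched with a concrete similarity measure. The patent does not name one; cosine similarity over per-frame feature vectors and the 0.95 threshold below are assumptions for illustration only.

```python
import math

def frames_synchronized(feats_a, feats_b, threshold=0.95):
    """Feature-similarity check between the two devices' frames for the
    same time period: declare the synchronization information consistent
    when every frame pair's cosine similarity clears an assumed threshold."""
    def cosine(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(y * y for y in v))
        return dot / (nu * nv) if nu and nv else 0.0
    return all(cosine(u, v) >= threshold for u, v in zip(feats_a, feats_b))
```

If this returns `True`, the synchronous branch (coding compression ratio and gray-scale defect list) applies; otherwise the asynchronous branch operates on the out-of-sync video frame list.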
In an alternative embodiment, determining the video codec impairment distribution of the second display device based on the pixel distribution information and a first video decoding quality factor of the second display device in the current time period and a second video decoding quality factor of the second display device in the idle time period, respectively, includes:
determining a parameter calibration duration corresponding to a current terminal device parameter corresponding to the first display device;
extracting a plurality of pieces of first video quality index information corresponding to the first video decoding quality factor and a plurality of pieces of second video quality index information corresponding to the second video decoding quality factor, and determining a set number of pieces of target video quality index information from the plurality of pieces of first video quality index information and the plurality of pieces of second video quality index information according to the parameter calibration duration;
determining the correlation weight of each quality index coefficient of the target video quality index information, and determining the number of quality index coefficients of which the current correlation quantity of the correlation weights is less than or equal to the preset correlation quantity according to the correlation weight of each quality index coefficient; calculating the proportion of the quality index coefficient number to the total quality index coefficient number of the target video quality index information to obtain video quality defect information of the target video quality index information; and determining the video coding and decoding defect distribution of the second display device according to the video quality defect information.
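The proportion computed in the last step above reduces to a simple ratio. A minimal sketch, assuming each coefficient contributes one "current correlation quantity" value (the patent leaves the units unspecified):

```python
def video_quality_defect_info(correlation_quantities, preset_quantity):
    """Ratio of quality-index coefficients whose current correlation
    quantity is at or below the preset correlation quantity, over the
    total coefficient count of the target video quality index information."""
    if not correlation_quantities:
        raise ValueError("no quality index coefficients")
    hits = sum(1 for q in correlation_quantities if q <= preset_quantity)
    return hits / len(correlation_quantities)
```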
In an alternative embodiment, performing terminal device parameter extraction on the first display device according to the first video stream data and the second video stream data to obtain a current terminal device parameter corresponding to the first display device includes:
determining a same video frame between the first video stream data and the second video stream data;
drawing a pixel comparison list of the same video frame;
and determining the current terminal equipment parameter corresponding to the first display equipment according to the characteristic description information difference between the pixels at the same position in the pixel comparison list.
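The three steps above — align the same frame, build a pixel comparison list, derive a parameter from per-position differences — can be sketched as below. The difference metric and the defect threshold are assumptions; the patent only says the parameter is derived from "characteristic description information differences" between same-position pixels.

```python
def pixel_comparison_list(frame_a, frame_b):
    """Pair up same-position pixels of the shared frame taken from the
    first and second video stream data."""
    return [(pa, pb) for row_a, row_b in zip(frame_a, frame_b)
                     for pa, pb in zip(row_a, row_b)]

def terminal_param_from_diff(pairs, defect_threshold=8):
    """Derive a hypothetical terminal device parameter: flag a
    coding/decoding defect when the mean absolute pixel difference
    exceeds an assumed threshold."""
    diffs = [abs(pa - pb) for pa, pb in pairs]
    mean_diff = sum(diffs) / len(diffs)
    return {"mean_diff": mean_diff, "codec_defect_flag": mean_diff > defect_threshold}
```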
In an alternative embodiment, when a current terminal device parameter corresponding to the first display device indicates that the first display device has a coding/decoding defect identifier, extracting pixel distribution information of the first display device from the current terminal device parameter includes:
extracting target equipment parameters of the first display equipment from the current terminal equipment parameters according to the parameter types corresponding to the coding and decoding defect marks;
and displaying and outputting the first display equipment by adopting the target equipment parameters to obtain the pixel distribution information.
In an alternative embodiment of the method of the present invention,
obtaining first video stream data of a first display device in the video networking within a current time period, comprising: extracting first video stream data in the current time period from a video playing record of the first display device;
obtaining second video stream data of the first display device in an idle period corresponding to a second display device in the video networking, including: and extracting second video stream data in the idle period from the video playing record of the first display device.
In a second aspect, a video coding system based on terminal device parameters is provided, comprising a video coding device and a plurality of display devices, wherein the video coding device is communicated with the plurality of display devices, and the plurality of display devices are communicated with each other to form a video networking; wherein the video encoding device is specifically configured to:
obtaining first video stream data of a first display device in the video networking in a current time period;
obtaining second video stream data of the first display device in an idle period corresponding to a second display device in the video networking; the second display device is a display device which has video sharing authority with the first display device in the video networking, and the idle period is used for representing that a video coding and decoding thread of the second display device is in a closed state;
extracting terminal equipment parameters of the first display equipment according to the first video stream data and the second video stream data to obtain current terminal equipment parameters corresponding to the first display equipment; when the current terminal equipment parameters corresponding to the first display equipment represent that the first display equipment has coding and decoding defect marks, extracting pixel distribution information of the first display equipment from the current terminal equipment parameters;
determining video coding and decoding defect distribution of the second display device based on the pixel distribution information and a first video decoding quality factor of the second display device in the current time interval and a second video decoding quality factor of the second display device in the idle time interval, and coding shared video stream data sent by the first display device to the second display device according to the video coding and decoding defect distribution.
In an alternative embodiment, the encoding, by the video encoding device, the shared video stream data sent by the first display device to the second display device according to the video coding and decoding defect distribution specifically includes:
determining a gray level co-occurrence matrix of a defect area corresponding to the video coding and decoding defect distribution;
if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a first set interval, encoding the shared video stream data according to the display configuration parameter corresponding to the second display device in the video coding and decoding defect distribution;
if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a second set interval, encoding the shared video stream data according to the encoding unit with the depth of zero in the video encoding and decoding defect distribution;
and if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a third set interval, encoding the shared video stream data according to the encoding unit with the depth of one in the video encoding and decoding defect distribution.
Through one or more of the technical solutions in this specification, the following beneficial effects or advantages are obtained:
the method comprises the steps of firstly obtaining first video stream data of a first display device in a current time period and second video stream data of the first display device in an idle time period, secondly extracting current terminal device parameters of the first display device according to the first video stream data and the second video stream data, extracting pixel distribution information of the first display device from the current terminal device parameters when the current terminal device parameters represent that the first display device has coding and decoding defect marks, then determining video coding and decoding defect distribution of a second display device based on the pixel distribution information and first video decoding quality factors of the second display device in the current time period and second video decoding quality factors in the idle time period, and coding shared video stream data sent by the first display device to the second display device according to the video coding and decoding defect distribution. The invention can improve the video coding rate on the premise of not influencing the coding accuracy and integrity, not only can ensure the transmission rate of the shared video stream data to avoid the pause of the second display equipment when outputting the shared video stream data, but also can ensure that the second display equipment outputs clear shared video stream data.
Therefore, the invention can send different audio and video coding parameter information according to different terminal display devices and send different audio and video streams to different types of terminals, thereby ensuring that different terminal devices can realize smooth audio and video playing on the premise of different parameters and improving the audio and video communication quality.
The above is only an outline of the technical solution of this specification. To make the technical means of this specification clearer and implementable in accordance with its content, and to make the above and other objects, features, and advantages of this specification more readily understandable, the detailed description follows.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the specification. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 shows a schematic diagram of a video encoding system based on terminal device parameters according to one embodiment of the present description;
FIG. 2 shows a flow diagram of a method for video encoding based on terminal device parameters according to one embodiment of the present description;
fig. 3 is a block diagram illustrating an apparatus for video encoding based on terminal device parameters according to an embodiment of the present specification.
Fig. 4 shows a schematic diagram of a video encoding device according to an embodiment of the present description.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The inventor has found through research that unclear display or stuttering when a device plays a shared video image originates at the output end of the shared image during encoding: when the output end encodes the shared video image, communication interference reduces encoding accuracy and integrity and also slows the encoding speed. The inventor therefore concluded that, to eliminate unclear display and stuttering during playback of shared video images, the video encoding speed must be increased without compromising encoding accuracy and integrity.
To achieve this, please first refer to fig. 1, which is a schematic diagram of a communication architecture of a video encoding system 100 based on terminal device parameters, wherein the video encoding system 100 may include a video encoding device 200 and a plurality of display devices 400. In which the video encoding apparatus 200 communicates with a plurality of display apparatuses 400, and the plurality of display apparatuses 400 communicate with each other to form a video networking.
On the basis of the above, please refer to fig. 2 in combination, a flowchart of a video encoding method based on terminal device parameters is provided, and the method may be applied to the video encoding device 200 in fig. 1, and specifically may include the contents described in the following steps S21 to S24.
Step S21, obtaining first video stream data of the first display device in the video networking in the current time period.
Step S22, obtaining second video stream data of the first display device in an idle period corresponding to a second display device in the video networking.
In this embodiment, the second display device is a display device in the video networking, which has a video sharing right with the first display device, and the idle period is used to represent that a video codec thread of the second display device is in an off state.
Step S23, according to the first video stream data and the second video stream data, performing terminal device parameter extraction on the first display device to obtain a current terminal device parameter corresponding to the first display device; and when the current terminal equipment parameters corresponding to the first display equipment represent that the first display equipment has coding and decoding defect marks, extracting pixel distribution information of the first display equipment from the current terminal equipment parameters.
Step S24, determining video coding and decoding defect distribution of the second display device based on the pixel distribution information and a first video decoding quality factor of the second display device in the current time period and a second video decoding quality factor of the second display device in the idle time period, respectively, and encoding shared video stream data sent by the first display device to the second display device according to the video coding and decoding defect distribution.
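Steps S21 to S24 can be sketched as a small pipeline. This is an illustrative skeleton only: the patent does not define concrete data types or the inner computations, so the parameter container and all callables below are hypothetical stand-ins injected by the caller.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceParams:
    """Hypothetical container for the extracted terminal device parameters."""
    has_codec_defect_flag: bool
    pixel_distribution: dict = field(default_factory=dict)

def encode_shared_stream(first_stream, second_stream, extract_params,
                         decode_quality_factors, defect_distribution_fn, encode_fn):
    """Sketch of S21-S24: S21/S22 supply the two streams, S23 extracts the
    terminal parameters and branches on the codec-defect identifier, and
    S24 derives the defect distribution and encodes the shared stream."""
    params = extract_params(first_stream, second_stream)            # step S23
    if not params.has_codec_defect_flag:
        return None  # the patent only specifies the defect-identifier branch
    q_current, q_idle = decode_quality_factors                      # step S24 inputs
    defects = defect_distribution_fn(params.pixel_distribution, q_current, q_idle)
    return encode_fn(defects)                                       # step S24 encoding
```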
It can be understood that, through the above-mentioned steps S21 to S24, the first video stream data of the first display device in the current period and the second video stream data in the idle period are obtained first. Next, the current terminal device parameters of the first display device are extracted from the first and second video stream data; when those parameters indicate that the first display device carries a coding/decoding defect identifier, the pixel distribution information of the first display device is extracted from them. The video coding and decoding defect distribution of the second display device is then determined from the pixel distribution information together with the second display device's first video decoding quality factor in the current period and second video decoding quality factor in the idle period, and the shared video stream data sent by the first display device to the second display device is encoded according to that defect distribution.
Therefore, the video coding rate can be improved without affecting coding accuracy and integrity: the transmission rate of the shared video stream data is ensured, stuttering when the second display device outputs the shared video stream data is avoided, and the second display device is guaranteed to output clear shared video stream data. Furthermore, the invention can send different audio and video coding parameter information to different terminal display devices and deliver different audio and video streams to different types of terminals, thereby ensuring that terminal devices with different parameters can play audio and video smoothly and improving the audio and video communication quality.
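The overall flow of steps S21-S24 can be sketched as follows. This is a minimal, hypothetical Python skeleton: the patent does not fix concrete data structures, so the dictionary keys and the injected helper functions below are all illustrative assumptions rather than a definitive implementation.

```python
# Hypothetical end-to-end sketch of steps S21-S24. Every name here is a
# stand-in; the specification does not define concrete data formats.
def encode_for_terminal(first_device, second_device,
                        extract_params, get_defect_distribution, encode):
    s1 = first_device["current_period_stream"]              # step S21
    s2 = first_device["idle_period_stream"]                 # step S22
    params = extract_params(s1, s2)                         # step S23
    if not params.get("codec_defect_flag"):
        # No coding/decoding defect identifier: no defect-driven encoding.
        return None
    pixel_info = params["pixel_distribution"]
    # Step S24, first half: defect distribution from pixel info plus the
    # two video decoding quality factors of the second display device.
    dist = get_defect_distribution(pixel_info,
                                   second_device["q_current"],
                                   second_device["q_idle"])
    # Step S24, second half: encode the shared stream per the distribution.
    return encode(first_device["shared_stream"], dist)
```

In use, the three helper callables would be supplied by the implementations of steps S23 and S24 described below.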
In an alternative embodiment, in order to ensure the integrity of encoding the shared video stream data, it is necessary to consider the different situations of the video codec defect distribution, and for this purpose, in step S24, the shared video stream data sent by the first display device to the second display device is encoded according to the video codec defect distribution, specifically including the contents described in the following steps S241 to S244.
Step S241, determining a gray level co-occurrence matrix of the defect area corresponding to the video encoding and decoding defect distribution.
Step S242, if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in the first setting interval, encoding the shared video stream data according to the display configuration parameter corresponding to the second display device in the video coding and decoding defect distribution.
In step S243, if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a second set interval, the shared video stream data is encoded according to the coding unit with the depth of zero in the video coding and decoding defect distribution.
In step S244, if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in the third setting section, the shared video stream data is encoded according to the encoding unit with the depth of one in the video coding and decoding defect distribution.
In this way, based on the above steps S241 to S244, different situations of video codec defect distribution can be considered, thereby ensuring the integrity of encoding the shared video stream data.
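The gray level co-occurrence matrix and the interval dispatch of steps S241-S244 can be illustrated with a small sketch. The quantization level count, the three interval boundaries, and the strategy labels are hypothetical choices, not values fixed by the specification; the correlation coefficient is computed as the standard Haralick correlation of the co-occurrence matrix.

```python
import numpy as np

def glcm(region, levels=8):
    """Normalized gray level co-occurrence matrix for horizontally
    adjacent pixel pairs in an 8-bit grayscale region (step S241)."""
    q = (region.astype(np.float64) / 256 * levels).astype(int)
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def glcm_correlation(p):
    """Haralick correlation of a normalized GLCM p."""
    i = np.arange(p.shape[0])
    mu_i = (p.sum(axis=1) * i).sum()
    mu_j = (p.sum(axis=0) * i).sum()
    var_i = (p.sum(axis=1) * (i - mu_i) ** 2).sum()
    var_j = (p.sum(axis=0) * (i - mu_j) ** 2).sum()
    if var_i == 0 or var_j == 0:
        return 1.0  # constant region: treat as perfectly correlated
    ii, jj = np.meshgrid(i, i, indexing="ij")
    return float(((ii - mu_i) * (jj - mu_j) * p).sum() / np.sqrt(var_i * var_j))

def select_coding_strategy(corr, first=(0.8, 1.01), second=(0.4, 0.8)):
    """Dispatch on which set interval the correlation coefficient falls
    into; interval bounds are illustrative (steps S242-S244)."""
    if first[0] <= corr < first[1]:
        return "display_configuration_parameters"   # step S242
    if second[0] <= corr < second[1]:
        return "coding_unit_depth_zero"             # step S243
    return "coding_unit_depth_one"                  # step S244
```

A flat defect area yields correlation 1 and takes the step S242 branch; a checkerboard-like area yields strongly negative correlation and takes the step S244 branch.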
Further, the encoding of the shared video stream data according to the coding unit with the depth of zero in the video codec defect distribution described in step S243 may specifically include the contents described in the following steps S2431 to S2433.
Step S2431, reading, from a preset encoding library, video output format information uploaded by the second display device when the video sharing right is established with the first display device, and obtaining, according to the video output format information, queue characteristic factors of a plurality of pixel grayscale queues corresponding to the second display device.
Step S2432, calculating the gray defect distribution of each pixel gray queue through the queue characteristic factors; and sequencing each pixel gray level queue according to the sequence of the gray level defect distribution from large to small, and selecting a set number of pixel gray level queues which are sequenced at the front as target gray level queues.
And step S2433, extracting a coding unit with a depth of zero from the video coding and decoding defect distribution based on the target gray level queue, and coding the shared video stream data according to the video frame decoding logic information corresponding to the coding unit.
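The ranking of pixel grayscale queues in steps S2432-S2433 can be sketched as below. The defect score derived from each queue's characteristic factors is a stand-in (mean absolute deviation), since the specification does not define how the gray defect distribution is computed; the queue identifiers are likewise hypothetical.

```python
def gray_defect_score(factors):
    """Toy gray-defect measure for one pixel grayscale queue: mean
    absolute deviation of its queue characteristic factors (step S2432)."""
    mean = sum(factors) / len(factors)
    return sum(abs(f - mean) for f in factors) / len(factors)

def select_target_queues(queues, set_number):
    """Sort queues by defect score, largest first, and keep the top
    `set_number` as target gray level queues (step S2432)."""
    ranked = sorted(queues, key=lambda q: gray_defect_score(queues[q]),
                    reverse=True)
    return ranked[:set_number]
```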
It is understood that the encoding of the shared video stream data according to the coding unit with depth of one in the video codec defect distribution described in step S244 is similar to the implementation principle of the above-mentioned steps S2431-S2433, and therefore will not be further described here.
Further, in order to ensure the synchronization when encoding the shared video stream data based on the display configuration parameters, the encoding of the shared video stream data according to the display configuration parameters corresponding to the second display device in the video codec defect distribution described in step S242 may further include the following steps S2421 to S2424.
Step S2421 of acquiring first video synchronization information between the first display device and the second display device from the first display device, and acquiring second video synchronization information between the first display device and the second display device from the second display device.
Step S2422, performing feature similarity calculation on video frames in the same time period in the first video synchronization information and the second video synchronization information to obtain a feature similarity calculation result; and judging whether the first video synchronization information and the second video synchronization information are synchronized according to the feature similarity calculation result.
Step S2423, if the first video synchronization information is synchronized with the second video synchronization information, extracting a coding compression ratio of the first display device from the first video synchronization information, determining a pixel gray scale defect list of the first display device according to the coding compression ratio, calibrating a pixel boundary value of the first display device corresponding to the pixel gray scale defect list to obtain a target gray scale defect list, determining a display configuration parameter corresponding to the second display device from the video coding and decoding defect distribution according to the target gray scale defect list, and encoding the shared video stream data by using the display configuration parameter and the target gray scale defect list.
Step S2424, if the first video synchronization information is not synchronous with the second video synchronization information, extracting a video frame list which is not synchronous with the second video synchronization information from the first video synchronization information, and determining video frame output information and video frame coding information of the video frame list; mapping field features corresponding to any group of first information fields in the video frame output information to a vector list of coding time domain distribution information corresponding to a second information field with the largest coding loss rate in the video frame coding information, so as to obtain mapping features corresponding to the field features in the vector list; and extracting set encoding and decoding parameters corresponding to the shared video frames in the video encoding and decoding defect distribution based on the mapping characteristics, determining display configuration parameters corresponding to the second display device through the direction identification corresponding to the set encoding and decoding parameters, and encoding the shared video stream data by adopting the display configuration parameters and the video frame encoding information.
When the contents described in the above-described step S2421 to step S2424 are applied, the synchronism in encoding the shared video stream data based on the display configuration parameters can be ensured.
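The synchronization judgment of step S2422 can be sketched as a per-period feature similarity test. The choice of cosine similarity and the 0.9 threshold are illustrative assumptions; the specification only requires some feature similarity calculation followed by a synchronization decision.

```python
import math

def cosine_similarity(u, v):
    """Feature similarity between two video frame feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def is_synchronized(first_sync_frames, second_sync_frames, threshold=0.9):
    """Step S2422 sketch: compare frame features period by period and
    declare synchronization only if every period clears the threshold."""
    sims = [cosine_similarity(a, b)
            for a, b in zip(first_sync_frames, second_sync_frames)]
    return all(s >= threshold for s in sims)
```

The boolean result then selects between the step S2423 branch (synchronized) and the step S2424 branch (not synchronized).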
In an implementation manner, in order to accurately determine the video codec defect distribution of the second display device, the determining the video codec defect distribution of the second display device based on the pixel distribution information and the first video decoding quality factor of the second display device in the current time period and the second video decoding quality factor of the second display device in the idle time period, which are described in step S24, may specifically include the following steps a to c.
Step a, determining a parameter calibration duration corresponding to the current terminal equipment parameter corresponding to the first display equipment.
And b, extracting a plurality of pieces of first video quality index information corresponding to the first video decoding quality factors and a plurality of pieces of second video quality index information corresponding to the second video decoding quality factors, and determining a set number of pieces of target video quality index information from the plurality of pieces of first video quality index information and the plurality of pieces of second video quality index information according to the parameter calibration duration.
Step c, determining the correlation weight of each quality index coefficient of the target video quality index information, and determining the number of quality index coefficients of which the current correlation quantity of the correlation weights is less than or equal to the preset correlation quantity according to the correlation weight of each quality index coefficient; calculating the proportion of the quality index coefficient number to the total quality index coefficient number of the target video quality index information to obtain video quality defect information of the target video quality index information; and determining the video coding and decoding defect distribution of the second display device according to the video quality defect information.
Through the steps a-c, the video coding and decoding defect distribution of the second display device can be accurately determined.
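The ratio computation of step c can be sketched as below. The representation of a quality index coefficient by a single correlation weight, and the preset threshold, are hypothetical simplifications of the specification's wording.

```python
def video_quality_defect_ratio(correlation_weights, preset_quantity):
    """Step c sketch: count the quality index coefficients whose
    correlation weight is at most the preset correlation quantity, then
    take the ratio against the total coefficient count. The resulting
    fraction serves as the video quality defect information."""
    low = sum(1 for w in correlation_weights if w <= preset_quantity)
    return low / len(correlation_weights)
```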
In a possible example, the extracting of the terminal device parameter from the first display device according to the first video stream data and the second video stream data described in step S23 may specifically include the following contents described in steps (11) to (13).
(11) Determining a same video frame between the first video stream data and the second video stream data.
(12) And drawing a pixel comparison list of the same video frame.
(13) And determining the current terminal equipment parameter corresponding to the first display equipment according to the characteristic description information difference between the pixels at the same position in the pixel comparison list.
Therefore, the real-time performance and the accuracy of the current terminal equipment parameters can be ensured.
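Steps (11)-(13) can be sketched as a frame intersection followed by a per-position pixel diff. Keying frames by timestamp, and summarizing the feature description differences as a mean absolute pixel difference, are illustrative assumptions.

```python
def shared_frames(first_stream, second_stream):
    """Step (11): frames (keyed, e.g., by timestamp) present in both
    the first and second video stream data."""
    return {k: (first_stream[k], second_stream[k])
            for k in first_stream if k in second_stream}

def pixel_comparison_list(frame_a, frame_b):
    """Step (12): per-position absolute differences between two
    equally sized grayscale frames."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

def mean_pixel_difference(diff_list):
    """Step (13) proxy: a scalar summary of the feature description
    differences, from which a device parameter could be derived."""
    flat = [d for row in diff_list for d in row]
    return sum(flat) / len(flat)
```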
Optionally, when the current terminal device parameter corresponding to the first display device indicates that the first display device has a coding/decoding defect identifier, which is described in step S23, the extracting of the pixel distribution information of the first display device from the current terminal device parameter may specifically include the content described in the following step (21) and step (22).
(21) And extracting the target equipment parameters of the first display equipment from the current terminal equipment parameters according to the parameter types corresponding to the coding and decoding defect marks.
(22) And displaying and outputting the first display equipment by adopting the target equipment parameters to obtain the pixel distribution information.
In this way, it can be ensured that the pixel distribution information can accurately reflect the display quality and the display size of the first display device.
In an alternative embodiment, the obtaining of the first video stream data of the first display device in the video networking in the current time period as described in step S21 includes: and extracting first video stream data in the current time period from the video playing record of the first display device. The obtaining of the second video stream data of the first display device in the idle period corresponding to the second display device in the video networking described in step S22 includes: and extracting second video stream data in the idle period from the video playing record of the first display device.
Based on the same inventive concept, a video coding system based on terminal device parameters is also provided, which is described in detail as follows.
A video coding system based on terminal device parameters comprises a video coding device and a plurality of display devices, wherein the video coding device is communicated with the display devices, and the display devices are communicated with each other to form a video networking; wherein the video encoding device is specifically configured to:
obtaining first video stream data of a first display device in the video networking in a current time period;
obtaining second video stream data of the first display device in an idle period corresponding to a second display device in the video networking; the second display device is a display device which has video sharing authority with the first display device in the video networking, and the idle period is used for representing that a video coding and decoding thread of the second display device is in a closed state;
extracting terminal equipment parameters of the first display equipment according to the first video stream data and the second video stream data to obtain current terminal equipment parameters corresponding to the first display equipment; when the current terminal equipment parameters corresponding to the first display equipment represent that the first display equipment has coding and decoding defect marks, extracting pixel distribution information of the first display equipment from the current terminal equipment parameters;
determining video coding and decoding defect distribution of the second display device based on the pixel distribution information and a first video decoding quality factor of the second display device in the current time interval and a second video decoding quality factor of the second display device in the idle time interval, and coding shared video stream data sent by the first display device to the second display device according to the video coding and decoding defect distribution.
Optionally, the encoding, by the video encoding device, the shared video stream data sent by the first display device to the second display device according to the video coding and decoding defect distribution specifically includes:
determining a gray level co-occurrence matrix of a defect area corresponding to the video coding and decoding defect distribution;
if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a first set interval, encoding the shared video stream data according to the display configuration parameter corresponding to the second display device in the video coding and decoding defect distribution;
if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a second set interval, encoding the shared video stream data according to the encoding unit with the depth of zero in the video encoding and decoding defect distribution;
and if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a third set interval, encoding the shared video stream data according to the encoding unit with the depth of one in the video encoding and decoding defect distribution.
Based on the same inventive concept as the previous embodiment, please refer to fig. 3 in combination, a block diagram of functional blocks of a video encoding apparatus 300 based on terminal device parameters is provided, and the apparatus may include the following functional blocks:
a first obtaining module 310, configured to obtain first video stream data of a first display device in the video networking in a current time period;
a second obtaining module 320, configured to obtain second video stream data of the first display device in an idle period corresponding to a second display device in the video networking; the second display device is a display device which has video sharing authority with the first display device in the video networking, and the idle period is used for representing that a video coding and decoding thread of the second display device is in a closed state;
a parameter extraction module 330, configured to perform terminal device parameter extraction on the first display device according to the first video stream data and the second video stream data, so as to obtain a current terminal device parameter corresponding to the first display device; when the current terminal equipment parameters corresponding to the first display equipment represent that the first display equipment has coding and decoding defect marks, extracting pixel distribution information of the first display equipment from the current terminal equipment parameters;
a video encoding module 340, configured to determine, based on the pixel distribution information and a first video decoding quality factor of the second display device in the current time period and a second video decoding quality factor of the second display device in the idle time period, a video coding and decoding defect distribution of the second display device, and encode, according to the video coding and decoding defect distribution, shared video stream data sent by the first display device to the second display device.
For the description of the functional modules, please refer to the description of the method shown in fig. 2, which is not described herein again.
Based on the same inventive concept as in the previous embodiments, the present specification further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of any of the methods described above.
Based on the same inventive concept as in the previous embodiment, an embodiment of the present specification further provides a video encoding apparatus 200, as shown in fig. 4, including a memory 204, a processor 202, and a computer program stored in the memory 204 and executable on the processor 202, wherein the processor 202 implements the steps of any one of the methods described above when executing the program.
Through one or more embodiments of the present description, the present description has the following advantages or beneficial effects:
the method comprises the steps of firstly obtaining first video stream data of a first display device in a current time period and second video stream data of the first display device in an idle time period, secondly extracting current terminal device parameters of the first display device according to the first video stream data and the second video stream data, extracting pixel distribution information of the first display device from the current terminal device parameters when the current terminal device parameters represent that the first display device has coding and decoding defect marks, then determining video coding and decoding defect distribution of a second display device based on the pixel distribution information and first video decoding quality factors of the second display device in the current time period and second video decoding quality factors in the idle time period, and coding shared video stream data sent by the first display device to the second display device according to the video coding and decoding defect distribution.
Obviously, the invention can improve the video coding rate without affecting coding accuracy and integrity: it ensures the transmission rate of the shared video stream data so that the second display device does not stall when outputting the shared video stream data, and it guarantees that the second display device outputs clear shared video stream data. In addition, during multi-terminal audio and video communication, the invention can automatically adjust the audio and video coding parameters according to the capability of each terminal device and send different video streams to different types of terminals, thereby improving the audio and video communication quality.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein, and the structure required to construct such a system will be apparent from the description above. Moreover, this description is not directed to any particular programming language; it will be appreciated that a variety of programming languages may be used to implement the teachings of the present specification, and specific languages are described above in order to disclose the best mode of the specification.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the present description may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the specification, various features of the specification are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed specification requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this specification.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the description and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of this description may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components of a gateway, proxy server, system in accordance with embodiments of the present description. The present description may also be embodied as an apparatus or device program (e.g., computer program and computer program product) for performing a portion or all of the methods described herein. Such programs implementing the description may be stored on a computer-readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the specification, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The description may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.

Claims (10)

1. A video coding method based on terminal device parameters, applied to a video coding device, the video coding device communicating with a plurality of display devices, the plurality of display devices communicating with each other to form a video network, the method comprising:
obtaining first video stream data of a first display device in the video networking in a current time period;
obtaining second video stream data of the first display device in an idle period corresponding to a second display device in the video networking; the second display device is a display device which has video sharing authority with the first display device in the video networking, and the idle period is used for representing that a video coding and decoding thread of the second display device is in a closed state;
extracting terminal equipment parameters of the first display equipment according to the first video stream data and the second video stream data to obtain current terminal equipment parameters corresponding to the first display equipment; when the current terminal equipment parameters corresponding to the first display equipment represent that the first display equipment has coding and decoding defect marks, extracting pixel distribution information of the first display equipment from the current terminal equipment parameters;
determining video coding and decoding defect distribution of the second display device based on the pixel distribution information and a first video decoding quality factor of the second display device in the current time interval and a second video decoding quality factor of the second display device in the idle time interval, and coding shared video stream data sent by the first display device to the second display device according to the video coding and decoding defect distribution.
2. The method of claim 1, wherein encoding the shared video stream data sent by the first display device to the second display device according to the video codec defect distribution comprises:
determining a gray level co-occurrence matrix of a defect area corresponding to the video coding and decoding defect distribution;
if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a first set interval, encoding the shared video stream data according to the display configuration parameter corresponding to the second display device in the video coding and decoding defect distribution;
if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a second set interval, encoding the shared video stream data according to the encoding unit with the depth of zero in the video encoding and decoding defect distribution;
and if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a third set interval, encoding the shared video stream data according to the encoding unit with the depth of one in the video encoding and decoding defect distribution.
3. The method of claim 2, wherein encoding the shared video stream data according to coding units with a depth of zero in the video codec defect distribution comprises:
reading video output format information uploaded by the second display device when the second display device establishes the video sharing right with the first display device from a preset coding library, and acquiring queue characteristic factors of a plurality of pixel gray level queues corresponding to the second display device according to the video output format information;
calculating the gray defect distribution of each pixel gray queue through the queue characteristic factors; sequencing each pixel gray level queue according to the sequence of the gray level defect distribution from large to small, and selecting a set number of pixel gray level queues which are sequenced in the front as target gray level queues;
and extracting a coding unit with the depth of zero from the video coding and decoding defect distribution based on the target gray level queue, and coding the shared video stream data according to the video frame decoding logic information corresponding to the coding unit.
4. The method of claim 2, wherein encoding the shared video stream data according to the display configuration parameters in the video codec defect distribution corresponding to the second display device comprises:
acquiring first video synchronization information between the first display device and the second display device from the first display device, and acquiring second video synchronization information between the first display device and the second display device from the second display device;
performing feature similarity calculation on video frames in the same time period in the first video synchronization information and the second video synchronization information to obtain a feature similarity calculation result; judging whether the first video synchronization information and the second video synchronization information are synchronous according to the feature similarity calculation result;
if the first video synchronization information is synchronous with the second video synchronization information, extracting a coding compression ratio of the first display device from the first video synchronization information, determining a pixel gray scale defect list of the first display device according to the coding compression ratio, calibrating a pixel boundary value of the first display device corresponding to the pixel gray scale defect list to obtain a target gray scale defect list, determining a display configuration parameter corresponding to the second display device from the video coding and decoding defect distribution according to the target gray scale defect list, and encoding the shared video stream data by using the display configuration parameter and the target gray scale defect list;
if the first video synchronization information is not synchronous with the second video synchronization information, extracting a video frame list which is not synchronous with the second video synchronization information from the first video synchronization information, and determining video frame output information and video frame coding information of the video frame list; mapping field features corresponding to any group of first information fields in the video frame output information to a vector list of coding time domain distribution information corresponding to a second information field with the largest coding loss rate in the video frame coding information, so as to obtain mapping features corresponding to the field features in the vector list; and extracting set encoding and decoding parameters corresponding to the shared video frames in the video encoding and decoding defect distribution based on the mapping characteristics, determining display configuration parameters corresponding to the second display device through the direction identification corresponding to the set encoding and decoding parameters, and encoding the shared video stream data by adopting the display configuration parameters and the video frame encoding information.
5. The method according to any one of claims 1 to 4, wherein determining the video coding and decoding defect distribution of the second display device based on the pixel distribution information and a first video decoding quality factor of the second display device in the current time period and a second video decoding quality factor of the second display device in the idle period, respectively, comprises:
determining a parameter calibration duration corresponding to a current terminal device parameter corresponding to the first display device;
extracting a plurality of pieces of first video quality index information corresponding to the first video decoding quality factor and a plurality of pieces of second video quality index information corresponding to the second video decoding quality factor, and determining a set number of pieces of target video quality index information from the plurality of pieces of first video quality index information and the plurality of pieces of second video quality index information according to the parameter calibration duration;
determining the correlation weight of each quality index coefficient of the target video quality index information, and counting, according to the correlation weights, the number of quality index coefficients whose correlation weight is less than or equal to a preset correlation quantity; calculating the ratio of that number to the total number of quality index coefficients of the target video quality index information to obtain video quality defect information of the target video quality index information; and determining the video coding and decoding defect distribution of the second display device according to the video quality defect information.
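The ratio computed in claim 5 can be sketched in Python purely for illustration; the function name, the sample weights, and the 0.5 preset correlation quantity are all assumptions, not values taken from the patent:

```python
def video_quality_defect_ratio(correlation_weights, preset_correlation=0.5):
    """Fraction of quality-index coefficients whose correlation weight is
    less than or equal to the preset correlation quantity (claim 5 sketch)."""
    if not correlation_weights:
        return 0.0
    weak = sum(1 for w in correlation_weights if w <= preset_correlation)
    return weak / len(correlation_weights)

# 3 of these 5 coefficients fall at or below the assumed 0.5 threshold,
# so the video quality defect information here is the ratio 0.6.
ratio = video_quality_defect_ratio([0.2, 0.9, 0.4, 0.5, 0.8])
```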
6. The method according to claim 5, wherein performing terminal device parameter extraction on the first display device according to the first video stream data and the second video stream data to obtain a current terminal device parameter corresponding to the first display device comprises:
determining a same video frame between the first video stream data and the second video stream data;
drawing a pixel comparison list of the same video frame;
and determining the current terminal device parameter corresponding to the first display device according to the feature description information difference between pixels at the same position in the pixel comparison list.
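For illustration only, the per-pixel comparison of claim 6 can be sketched in Python. Using the mean absolute difference as the "feature description information difference" is an assumption of this sketch; the claim does not fix a particular difference measure:

```python
def current_device_parameter(frame_a, frame_b):
    """Mean absolute per-pixel difference between the same video frame as it
    appears in the first and second video stream data (claim 6 sketch).
    Frames are row-major lists of grayscale values at the same positions."""
    diffs = [abs(a - b)
             for row_a, row_b in zip(frame_a, frame_b)
             for a, b in zip(row_a, row_b)]
    return sum(diffs) / len(diffs) if diffs else 0.0

# Two 2x2 renderings of the same video frame; the pixel comparison list is
# the implicit pairing of same-position pixels.
param = current_device_parameter([[10, 20], [30, 40]], [[12, 20], [28, 44]])
```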
7. The method according to claim 1, wherein, when a current terminal device parameter corresponding to the first display device indicates that the first display device carries a coding and decoding defect identifier, extracting pixel distribution information of the first display device from the current terminal device parameter comprises:
extracting target device parameters of the first display device from the current terminal device parameter according to the parameter type corresponding to the coding and decoding defect identifier;
and performing display output on the first display device by using the target device parameters to obtain the pixel distribution information.
8. The method of claim 1,
obtaining first video stream data of a first display device in the video networking within a current time period, comprising: extracting first video stream data in the current time period from a video playing record of the first display device;
obtaining second video stream data of the first display device in an idle period corresponding to a second display device in the video networking, including: and extracting second video stream data in the idle period from the video playing record of the first display device.
9. A video coding system based on terminal device parameters, characterized by comprising a video encoding device and a plurality of display devices, wherein the video encoding device communicates with the display devices, and the display devices communicate with each other to form a video networking; wherein the video encoding device is specifically configured to:
obtaining first video stream data of a first display device in the video networking in a current time period;
obtaining second video stream data of the first display device in an idle period corresponding to a second display device in the video networking; the second display device is a display device which has video sharing authority with the first display device in the video networking, and the idle period is used for representing that a video coding and decoding thread of the second display device is in a closed state;
performing terminal device parameter extraction on the first display device according to the first video stream data and the second video stream data to obtain a current terminal device parameter corresponding to the first display device; and, when the current terminal device parameter corresponding to the first display device indicates that the first display device carries a coding and decoding defect identifier, extracting pixel distribution information of the first display device from the current terminal device parameter;
determining the video coding and decoding defect distribution of the second display device based on the pixel distribution information and a first video decoding quality factor of the second display device in the current time period and a second video decoding quality factor of the second display device in the idle period, and encoding shared video stream data sent by the first display device to the second display device according to the video coding and decoding defect distribution.
10. The system according to claim 9, wherein the video encoding device encoding the shared video stream data sent by the first display device to the second display device according to the video codec defect distribution specifically comprises:
determining a gray level co-occurrence matrix of a defect area corresponding to the video coding and decoding defect distribution;
if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a first set interval, encoding the shared video stream data according to the display configuration parameter corresponding to the second display device in the video coding and decoding defect distribution;
if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a second set interval, encoding the shared video stream data according to the encoding unit with the depth of zero in the video encoding and decoding defect distribution;
and if the correlation coefficient of the adjacent pixel corresponding to the gray level co-occurrence matrix is located in a third set interval, encoding the shared video stream data according to the encoding unit with the depth of one in the video encoding and decoding defect distribution.
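The three-interval dispatch of claim 10 can be sketched in Python for illustration. The correlation feature below is the standard gray-level co-occurrence matrix (GLCM) correlation; the interval bounds 0.8 and 0.4, the function names, and the returned path labels are assumptions of the sketch, since the claim leaves the "set intervals" unspecified:

```python
def glcm_correlation(glcm):
    """Correlation feature of a gray-level co-occurrence matrix, i.e. the
    Pearson correlation of the adjacent-pixel gray-level pair (i, j)."""
    n = len(glcm)
    total = sum(sum(row) for row in glcm)
    p = [[v / total for v in row] for row in glcm]          # normalize
    mu_i = sum(i * p[i][j] for i in range(n) for j in range(n))
    mu_j = sum(j * p[i][j] for i in range(n) for j in range(n))
    var_i = sum((i - mu_i) ** 2 * p[i][j] for i in range(n) for j in range(n))
    var_j = sum((j - mu_j) ** 2 * p[i][j] for i in range(n) for j in range(n))
    if var_i == 0 or var_j == 0:
        return 1.0  # degenerate defect area: a single gray level
    cov = sum((i - mu_i) * (j - mu_j) * p[i][j]
              for i in range(n) for j in range(n))
    return cov / (var_i ** 0.5 * var_j ** 0.5)

def pick_encoding_path(corr):
    """Map the adjacent-pixel correlation coefficient onto the three set
    intervals of claim 10 (interval bounds are illustrative only)."""
    if corr >= 0.8:
        return "display-configuration-parameters"   # first set interval
    if corr >= 0.4:
        return "coding-unit-depth-0"                # second set interval
    return "coding-unit-depth-1"                    # third set interval
```

A diagonal co-occurrence matrix (adjacent pixels always share a gray level) yields correlation 1.0 and therefore the first path under these assumed bounds.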
CN202011116351.0A 2020-10-19 2020-10-19 Video coding method and system based on terminal equipment parameters Active CN111954034B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011116351.0A CN111954034B (en) 2020-10-19 2020-10-19 Video coding method and system based on terminal equipment parameters

Publications (2)

Publication Number Publication Date
CN111954034A CN111954034A (en) 2020-11-17
CN111954034B true CN111954034B (en) 2021-01-19

Family

ID=73357116

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115529491B (en) * 2022-01-10 2023-06-06 荣耀终端有限公司 A method for decoding audio and video, an apparatus for decoding audio and video, and a terminal device
CN115643427B (en) * 2022-12-23 2023-04-07 广州佰锐网络科技有限公司 Ultra-high-definition audio and video communication method and system and computer readable storage medium
CN120416482B (en) * 2025-07-01 2025-09-02 中科方寸知微(南京)科技有限公司 Device self-adaptive image coding and decoding method

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US6823394B2 (en) * 2000-12-12 2004-11-23 Washington University Method of resource-efficient and scalable streaming media distribution for asynchronous receivers
US10530574B2 (en) * 2010-03-25 2020-01-07 Massachusetts Institute Of Technology Secure network coding for multi-description wireless transmission
US9538137B2 (en) * 2015-04-09 2017-01-03 Microsoft Technology Licensing, Llc Mitigating loss in inter-operability scenarios for digital video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant