CN117956177A - Cloud rendering method, cloud rendering device, medium and equipment - Google Patents
- Publication number: CN117956177A (application CN202410170887.2A)
- Authority: CN (China)
- Prior art keywords: client; encoder; network; server; cloud rendering
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N 19/42 — Coding/decoding of digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N 19/146 — Adaptive coding controlled by the data rate or code amount at the encoder output
- H04N 19/70 — Coding characterised by syntax aspects related to video coding, e.g. related to compression standards
- H04N 21/2402 — Monitoring of the downstream path of the transmission network, e.g. bandwidth available
Abstract
The disclosure relates to a cloud rendering method, a cloud rendering device, a medium, and a device. The method provided by the embodiments of the disclosure is executed by a server and comprises the following steps: initializing an encoder of the server based on an optimal video compression algorithm transmitted by a client; encoding first rendering data with the initialized encoder according to the optimal video compression algorithm; pushing the encoded first rendering data to the client, monitoring the push-stream transmission state, and obtaining an evaluation result of the current network bandwidth from the monitoring result; and dynamically adjusting parameters of the encoder according to the evaluation result. Because the method takes both the capability of the client and the condition of the network bandwidth into account, the network communication quality during cloud rendering is guaranteed, video delay and stuttering do not occur, and a better cloud rendering effect is achieved.
Description
The present application claims priority from Chinese patent application No. 2023118622338, filed on December 29, 2023, entitled "Cloud rendering method, cloud rendering apparatus, medium, and device", the entire contents of which are incorporated herein by reference.
Technical Field
The disclosure relates to the technical field of digital twins, and in particular to a cloud rendering method, a cloud rendering device, a medium, and a device.
Background
In cloud rendering technology, rendering works much like conventional cloud computing: a 3D (three-dimensional) program runs on a remote server, and the user terminal, through Web software or a "cloud rendering" button in a local 3D program, accesses resources over the high-speed Internet. The user terminal sends instructions, the server executes the corresponding rendering tasks according to those instructions, and the resulting frames are transmitted back to the user terminal for display. The codec is a key component of a cloud rendering service and comes in two forms: software codecs, which encode and decode on the central processing unit (CPU) and are simple to implement but place a heavy load on the CPU and deliver lower performance; and hardware codecs, which encode mainly on a graphics processing unit (GPU) or similar hardware and deliver high performance and speed. The most widely used video compression standard at present is H.264, usually combined with a dynamic bitrate adjustment strategy to improve transmission.
However, the H.264 standard is aging, and its coding efficiency no longer meets the requirements of cloud rendering services. Moreover, when the bitrate is adjusted dynamically, the video may oscillate between sharp and blurry: when the estimated bandwidth is small, the encoder reduces the bitrate and the video becomes blurry; when the estimated bandwidth is large, the encoder raises the bitrate and the video becomes sharp again. An overly low bitrate means an excessive compression ratio that degrades video clarity and quality, while large bitrate swings cause the picture to alternate between blurry and sharp, which severely harms the user experience.
Therefore, how to provide a high-quality cloud rendering method that avoids video delay and stuttering is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The embodiments of the disclosure aim to provide a cloud rendering method, a cloud rendering device, a medium, and a device, so as to solve the problem that existing cloud rendering exhibits obvious video delay, stuttering, and similar phenomena and therefore cannot support high-quality 3D scene interaction.
In a first aspect, an embodiment of the present disclosure provides a cloud rendering method, executed by a server, comprising:
initializing an encoder of the server based on an optimal video compression algorithm transmitted by a client;
encoding first rendering data with the initialized encoder according to the optimal video compression algorithm;
pushing the encoded first rendering data to the client, monitoring the push-stream transmission state, and obtaining an evaluation result of the current network bandwidth from the monitoring result; and
dynamically adjusting parameters of the encoder according to the evaluation result.
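The four steps above can be sketched as a minimal server-side loop. This is a sketch under stated assumptions: the `Encoder` class, the `push` transport callback, and the `evaluate` helper are hypothetical stand-ins, not an API defined by this disclosure, and the bitrate step sizes are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Encoder:
    """Hypothetical encoder wrapper; a real system would bind NVENC, x264, etc."""
    algorithm: str              # chosen by the client, e.g. "H.265"
    bitrate_kbps: int = 8000    # step 1: initialized per client capability

    def encode(self, frame):
        # Stand-in for real compression: record current settings with the frame.
        return {"algo": self.algorithm, "bitrate": self.bitrate_kbps, "frame": frame}

def render_loop(frames, encoder, push, evaluate):
    """One server-side cycle per frame: encode, push, monitor, adjust.

    `push` sends a packet and returns monitored transmission stats;
    `evaluate` maps those stats to 'good' / 'normal' / 'poor'.
    """
    history = []
    for frame in frames:
        packet = encoder.encode(frame)      # step 2: encode rendering data
        stats = push(packet)                # step 3: push and monitor the stream
        verdict = evaluate(stats)           # step 3: bandwidth evaluation result
        if verdict == "poor":               # step 4: dynamic parameter adjustment
            encoder.bitrate_kbps = max(500, encoder.bitrate_kbps // 2)
        elif verdict == "good":
            encoder.bitrate_kbps = min(20000, encoder.bitrate_kbps + 1000)
        history.append((packet, verdict))
    return history
```

Keeping the evaluation behind a callback mirrors the claim structure: the packet-loss and delay rules of the later embodiments plug in without changing the loop.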
In an alternative embodiment, initializing the encoder of the server based on the optimal video compression algorithm transmitted by the client comprises:
obtaining the image quality requirement of the client; and
initializing the encoder of the server according to the optimal video compression algorithm transmitted by the client and the image quality requirement.
In an alternative embodiment, monitoring the push-stream transmission state comprises:
monitoring the network packet loss rate and/or the network delay while pushing the encoded first rendering data to the client.
In an alternative embodiment, obtaining the evaluation result of the current network bandwidth from the monitoring result comprises:
if the network packet loss rate is less than or equal to a first preset packet loss rate and the network delay keeps decreasing, determining that the current network bandwidth is good; or
if the network packet loss rate is greater than the first preset packet loss rate and less than or equal to a second preset packet loss rate, and the network delay lies within a preset delay interval, determining that the current network bandwidth is normal; or
if the network packet loss rate is greater than the second preset packet loss rate or the network delay keeps increasing, determining that the current network bandwidth is poor.
In an alternative embodiment, the method further comprises:
if the current network bandwidth is determined to be good or poor based on the network packet loss rate and/or the network delay, adjusting the transmission so that the network bandwidth returns to normal.
In an alternative embodiment, dynamically adjusting the parameters of the encoder according to the evaluation result comprises:
when the network bandwidth is normal, setting the target bitrate of the encoder according to the network bandwidth evaluation value.
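Deriving the target bitrate from a bandwidth evaluation value is commonly done by reserving headroom below the estimate. A minimal sketch, assuming an illustrative 0.8 utilization factor and clamping bounds (none of these values come from the disclosure):

```python
def target_bitrate_kbps(bandwidth_estimate_kbps,
                        utilization=0.8,
                        min_kbps=500, max_kbps=20000):
    """Set the encoder's target bitrate from the bandwidth evaluation value.

    Leaving (1 - utilization) headroom absorbs short-term jitter so the
    stream does not saturate the link and reintroduce loss and delay.
    """
    raw = int(bandwidth_estimate_kbps * utilization)
    return max(min_kbps, min(max_kbps, raw))
```

Clamping keeps the encoder inside the quality presets' bitrate range even when the bandwidth estimate is briefly extreme.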
In an alternative embodiment, the method further comprises:
encoding second rendering data with the encoder after parameter adjustment, and pushing the encoded second rendering data to the client.
In a second aspect, an embodiment of the present disclosure further provides a cloud rendering device, deployed at a server, comprising:
an initialization module, configured to initialize an encoder of the server based on an optimal video compression algorithm transmitted by a client;
an encoding module, configured to encode first rendering data with the initialized encoder according to the optimal video compression algorithm;
a push module, configured to push the encoded rendering data to the client, monitor the push-stream transmission state, and obtain an evaluation result of the current network bandwidth from the monitoring result; and
an adjustment module, configured to dynamically adjust parameters of the encoder according to the evaluation result.
In an alternative embodiment, to initialize the encoder of the server based on the optimal video compression algorithm transmitted by the client, the initialization module is configured to:
obtain the image quality requirement of the client; and
initialize the encoder of the server according to the optimal video compression algorithm transmitted by the client and the image quality requirement.
In an alternative embodiment, to monitor the push-stream transmission state, the push module is configured to:
monitor the network packet loss rate and/or the network delay while pushing the encoded first rendering data to the client.
In an alternative embodiment, to obtain the evaluation result of the current network bandwidth from the monitoring result, the push module is configured to:
if the network packet loss rate is less than or equal to a first preset packet loss rate and the network delay keeps decreasing, determine that the current network bandwidth is good; or
if the network packet loss rate is greater than the first preset packet loss rate and less than or equal to a second preset packet loss rate, and the network delay lies within a preset delay interval, determine that the current network bandwidth is normal; or
if the network packet loss rate is greater than the second preset packet loss rate or the network delay keeps increasing, determine that the current network bandwidth is poor.
In an alternative embodiment, the cloud rendering device further comprises:
a bandwidth optimization module, configured to: if the current network bandwidth is determined to be good or poor based on the network packet loss rate and/or the network delay, adjust the transmission so that the network bandwidth returns to normal.
In an alternative embodiment, to dynamically adjust the parameters of the encoder according to the evaluation result, the adjustment module is configured to:
when the network bandwidth is normal, set the target bitrate of the encoder according to the network bandwidth evaluation value.
In an alternative embodiment, the push module is further configured to:
encode second rendering data with the encoder after parameter adjustment, and push the encoded second rendering data to the client.
In a third aspect, an embodiment of the present disclosure provides an electronic device comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the cloud rendering method of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing a program or instructions which, when executed by a processor, implement the steps of the cloud rendering method of the first aspect.
The scheme of the present disclosure provides at least the following beneficial effects:
according to the scheme, the server initializes its encoder based on the optimal video compression algorithm transmitted by the client; the initialized encoder then encodes the rendering data according to that algorithm; the encoded rendering data is pushed to the client while the push-stream transmission state is monitored; an evaluation result of the network bandwidth is obtained from the monitoring result; and the encoder parameters are dynamically adjusted according to the evaluation result. In this way, the optimal video compression algorithm adapted to the client can be selected according to the client's capability, for example its decoding capability, and the server's encoder is initialized accordingly. While the bitstream output by the server's encoder is being pushed to the client, the encoder parameters can be dynamically adjusted based on the bandwidth evaluation result, so that they are always set reasonably. Because both the client's capability and the network bandwidth are taken into account when producing the output video, the network communication quality during cloud rendering is guaranteed, video delay and stuttering do not occur, and a better cloud rendering effect is achieved.
Drawings
FIG. 1 shows a flow chart of a cloud rendering method provided by an embodiment of the present disclosure;
FIG. 2 shows a flow chart of selecting a video compression algorithm provided by an embodiment of the present disclosure;
FIG. 3 shows a flow chart of adjusting the current network bandwidth provided by an embodiment of the present disclosure;
FIG. 4 shows a schematic structural diagram of a cloud rendering device provided by an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure;
FIG. 6 shows a schematic hardware structure of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
The application is described in further detail below with reference to the figures and examples. The features and advantages of the present application will become more apparent from this description.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
In addition, the technical features described below in the different embodiments of the present application may be combined with one another as long as they do not conflict.
To address the problem that existing cloud rendering exhibits obvious video delay, stuttering, and similar phenomena and therefore cannot support high-quality 3D scene interaction, the embodiments of the disclosure provide a cloud rendering method, a cloud rendering device, a medium, and a device: an optimal video compression algorithm is selected, the server's encoder is initialized according to the image quality requirement of the client, and meanwhile the network bandwidth is evaluated during video transmission so that the amount of transmitted data is controlled to match real-time bandwidth conditions, guaranteeing the network communication quality during cloud rendering.
The embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings by way of specific embodiments and application scenarios thereof.
Fig. 1 is a flowchart of a cloud rendering method provided in an embodiment of the present disclosure. Referring to fig. 1, the method may include the steps of:
Step 101, the client selects an optimal video compression algorithm according to its decoding capability.
In a cloud rendering service, the interaction between the client and the server is critical to the selection and adjustment of the video compression algorithm. Many video coding standards are in common use, such as H.264, VP9, AV1, H.265, and H.266; the newer ones provide higher compression rates, which allow more data to be transmitted under limited network conditions and thus guarantee better quality of service. In this step, the decoding capability of the client is determined first, and a video compression algorithm adapted to the client is selected on that basis, which better improves the user experience. Because the codec types and video codec algorithms configured on a client are limited, hardware decoding is preferred when the client supports both software and hardware decoding.
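Client-side selection can be sketched as picking the most efficient codec the device can hardware-decode, falling back to software decoding only if necessary. This is a sketch: the preference order and the capability sets are illustrative assumptions, not values fixed by the disclosure.

```python
# Codecs ordered from most to least efficient compression (illustrative order).
CODEC_PREFERENCE = ["H.266", "AV1", "H.265", "VP9", "H.264"]

def select_optimal_codec(hard_decoders, soft_decoders):
    """Pick the best codec the client can decode, preferring hardware decoding.

    hard_decoders / soft_decoders: sets of codec names the device supports.
    Returns (codec_name, decoder_kind).
    """
    for codec in CODEC_PREFERENCE:       # hardware decode is preferred
        if codec in hard_decoders:
            return codec, "hard"
    for codec in CODEC_PREFERENCE:       # software decode as fallback
        if codec in soft_decoders:
            return codec, "soft"
    raise ValueError("client supports no known codec")
```

The same two-pass structure also covers the decoder-switching cases discussed later: removing a codec from `hard_decoders` (hardware incompatibility, overload) naturally falls through to the software path.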
Step 102, the server initializes the encoder of the server according to the optimal video compression algorithm.
After the client selects the optimal video compression algorithm according to its decoding capability, it transmits this algorithm to the cloud rendering server (also simply called the server). On receiving it, the server can initialize its encoder based on the optimal video compression algorithm. During initialization, the image quality requirement of the client must also be considered, and the corresponding encoder parameters are configured according to factors such as the client user's preferences, the network bandwidth, and the content type, which improves the user experience in the initial stage.
That is, in one possible implementation, after receiving the optimal video compression algorithm transmitted by the client, the server may further obtain the image quality requirement of the client and then initialize its encoder according to both the optimal video compression algorithm and that image quality requirement.
Step 103, the server encodes the rendering data according to the optimal video compression algorithm using the initialized encoder.
After the server initializes the encoder, the initialized encoder can encode the rendering data according to the optimal video compression algorithm. For ease of distinction, the rendering data in the current cycle is denoted first rendering data in this application; that is, the server encodes the first rendering data with the initialized encoder according to the optimal video compression algorithm.
Step 104, the server side pushes the encoded rendering data to the client side, and monitors the push transmission state in real time to obtain an evaluation result of the current network bandwidth.
After encoding the first rendering data, the server may push it to the client, where it is rendered and displayed for the user. While pushing the encoded first rendering data to the client, the server monitors the push-stream transmission state and obtains an evaluation result of the current network bandwidth from the monitoring result.
In this step, the state of the stream pushed from the server to the client is obtained in real time, so the current network bandwidth condition can be estimated accurately.
Step 105, the server dynamically adjusts the encoder parameters of the server according to the evaluation result of the network bandwidth.
In this step, the evaluation result of the current network bandwidth obtained in step 104 serves as the basis for adjusting the encoder parameters, which effectively avoids or reduces obvious video delay, stuttering, and similar problems in the cloud rendering service and enables high-quality 3D scene interaction.
According to the above scheme, the client selects the optimal video compression algorithm matched to its own decoding capability and transmits it to the server. The server initializes its encoder based on this algorithm, encodes the rendering data accordingly, pushes the encoded data to the client while monitoring the push-stream transmission state, obtains an evaluation result of the network bandwidth from the monitoring result, and dynamically adjusts the encoder parameters according to that result.
In this way, the optimal video compression algorithm adapted to the client is selected according to the client's capability (for example, its decoding capability), the server's encoder is initialized accordingly, and while the encoder's bitstream is pushed to the client, the encoder parameters are dynamically adjusted based on the bandwidth evaluation. The encoder parameters are thus always set reasonably, both the client's capability and the network bandwidth are taken into account when producing the output video, the network communication quality during cloud rendering is guaranteed, video delay and stuttering do not occur, and a better cloud rendering effect is achieved.
In some alternative implementations, referring to FIG. 2, which shows a flow chart of selecting the optimal video compression algorithm provided by an embodiment of the present disclosure, selecting the optimal video compression algorithm according to the decoding capability of the client may be implemented as follows:
Step 201, obtain first codec information from the communication protocol between the client and the server.
The first codec information includes the decoding capability of the client and the image quality requirement of the client.
Because the encoding capability of the cloud rendering server and the decoding capability of the client differ, the cloud rendering service must combine the relevant capabilities of both. In this embodiment, the client first notifies the cloud rendering server of its selected video compression algorithm through the communication protocol between them; the decoding capability of the client is determined, and the most reasonable video compression algorithm for the current transmission, namely the optimal video compression algorithm, is selected accordingly so as to achieve the best cloud rendering transmission quality. The cloud rendering server then adjusts the image quality parameters in real time according to the network transmission quality to deliver a high-quality cloud rendering service.
Step 202, determine the optimal codec algorithm of the client based on the video compression algorithm type and decoder type in the first codec information.
The video compression algorithm type and decoder type of the client are used to determine its decoding capability.
Step 203, select the optimal video compression algorithm adapted to the client according to the client's optimal codec algorithm.
Optionally, during this selection the client determines the optimal video compression algorithm and decoder (preferably a hardware decoder) adapted to itself, and notifies the cloud rendering server of the selected optimal video compression algorithm through the communication protocol between them; the cloud rendering server then initializes the corresponding hardware encoder according to the optimal video compression algorithm selected by the client.
In another possible implementation, the server may select the optimal video compression algorithm itself. Specifically, the server obtains the first codec information from the communication protocol between the client and the server, determines the decoding capability of the client from it, and then selects the video compression algorithm matched to that decoding capability as the optimal video compression algorithm.
In a cloud rendering service, the interaction between the client and the server is critical to the selection and adjustment of the optimal video compression algorithm. The client typically weighs the following factors when selecting it: hardware compatibility, ensuring that the selected algorithm is compatible with the client hardware; performance requirements, choosing an algorithm suited to the device's processing capability to guarantee smooth playback; and network conditions, considering bandwidth and stability and choosing an algorithm that adapts to network changes. Note that the optimal video compression algorithm is determined jointly by the client's hardware performance, the network bandwidth and stability, and the user's image quality requirement, so when one or more of these factors change, the optimal algorithm may change as well. To safeguard the user experience, the algorithm may therefore be switched dynamically: during transmission, if network conditions or client hardware conditions change, it may be necessary to switch to a different compression algorithm to maintain image quality and smoothness.
Similarly, since the client prefers hardware decoding, the decoder may also be switched to safeguard the user experience and transmission quality, for example:
1. Hardware incompatibility: if the client hardware does not support a particular hardware decoding algorithm, a switch to software decoding may be required.
2. High performance requirements: when the hardware's processing power is insufficient for a high-quality video stream, software decoding may be employed.
3. Special encoding formats: some special or new encoding formats may, for a time, be supported only by a software decoder.
In summary, in a cloud rendering service, the selection and adjustment of the optimal video compression algorithm is a dynamic process that must weigh the client's hardware capability, the network conditions, and the actual video content requirements. Switching between hardware and software decoding depends on the specific scenario and requirements, ensuring a smooth, high-quality video experience under different conditions.
In some optional implementations, the cloud rendering service presets encoding parameters, such as maximum and minimum resolution, maximum and minimum frame rate, and maximum and minimum code rate, for different image quality requirements such as high definition, standard definition, and smooth playback. When initializing the encoder of the server according to the client's image quality requirement, the cloud rendering server selects one of these image quality parameter sets to initialize the encoder according to the specific usage requirement.
Specifically, in the cloud rendering service, "selecting one of the image quality parameters to initialize the encoder" is an important step, and in actual use the following factors affect encoder initialization: 1. different client devices, such as smartphones, tablets, and computers, have different display capabilities, and newer devices may support higher resolutions and frame rates; 2. network bandwidths differ, and under lower network bandwidth a lower image quality setting may be needed to ensure smooth transmission; 3. user preferences differ, and users may select different image qualities according to personal preference, for example high definition for a better visual experience; 4. different types of content, such as movies, games, and document presentations, may have different image quality requirements, where games may require higher frame rates to ensure smoothness. Accordingly, the specific application scenario and the manner of determining the image quality requirement can be understood as follows:
When determining the image quality requirement based on the above factors: automatically detect the display capabilities of the client device, such as resolution and maximum supported frame rate; perform a network speed test to determine the maximum data transmission rate supportable in the current network environment; provide a user interface for the user to select a desired image quality according to personal preference; and determine an appropriate image quality setting according to the nature and demands of the content provided. The encoder parameters are then initialized on this basis.
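To make the preset image-quality parameter sets and the encoder initialization concrete, here is a hedged Python sketch; the tier names, the resolution/frame-rate/code-rate bounds, and the function names are illustrative assumptions, not values from the disclosure:

```python
# Hypothetical preset table: each image-quality tier maps to bounds on
# resolution, frame rate, and code rate (bitrate), as the text describes.
QUALITY_PRESETS = {
    "high_definition": {"max_res": (1920, 1080), "min_res": (1280, 720),
                        "max_fps": 60, "min_fps": 30,
                        "max_kbps": 8000, "min_kbps": 3000},
    "standard_definition": {"max_res": (1280, 720), "min_res": (854, 480),
                            "max_fps": 30, "min_fps": 24,
                            "max_kbps": 3000, "min_kbps": 1000},
    "smooth": {"max_res": (854, 480), "min_res": (640, 360),
               "max_fps": 30, "min_fps": 15,
               "max_kbps": 1000, "min_kbps": 300},
}

def init_encoder_params(quality: str, client_max_res, measured_kbps):
    """Pick initial encoder parameters from the preset, capped by the
    client's display capability and the measured network rate."""
    p = QUALITY_PRESETS[quality]
    width = min(p["max_res"][0], client_max_res[0])
    height = min(p["max_res"][1], client_max_res[1])
    # Clamp the starting bitrate into the tier's [min, max] range.
    bitrate = max(p["min_kbps"], min(p["max_kbps"], measured_kbps))
    return {"resolution": (width, height), "fps": p["max_fps"], "kbps": bitrate}
```

A server could call `init_encoder_params("high_definition", detected_resolution, speed_test_kbps)` once the client's capabilities and the speed-test result are known.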
It should be noted that, during the subsequent push-stream transmission, the cloud rendering server may also adjust the encoder parameters according to actual conditions, as follows: if the user changes the image quality setting during transmission, the cloud rendering server needs to adjust the encoding parameters accordingly to adapt to the new image quality requirement; if the bandwidth estimation result changes, the encoder parameters can be adjusted to ensure transmission efficiency. In summary, image quality parameter selection and encoder initialization in cloud rendering services is a multivariate decision process that requires comprehensive consideration of device capabilities, network conditions, user preferences, and content requirements. A user changing the image quality setting during transmission directly affects bandwidth estimation and the adjustment of encoding parameters, so as to ensure an optimal balance between image quality and transmission efficiency.
In some optional manners, the cloud rendering server may monitor the push transmission state in real time to obtain an evaluation result of the current network bandwidth. Optionally, the monitoring of the push stream transmission state in real time may be implemented in the following manner: and in the process that the server side pushes the encoded first rendering data to the client side, monitoring the network packet loss rate and/or the network delay in real time.
Further, when monitoring the network packet loss rate and the network delay, the cloud rendering server may adopt the following strategies to improve the accuracy of the acquired data: 1. enhance data acquisition accuracy, for example by using high-precision timestamps to record the transmission and reception times of data packets, and achieve finer-grained acquisition, for example by increasing the acquisition frequency to more accurately capture network fluctuations; 2. apply intelligent data analysis, for example by analyzing the collected data with a machine learning algorithm to predict the trend of network conditions, or by implementing an adaptive algorithm that dynamically adjusts network parameters according to historical and real-time data; 3. use redundancy and error recovery mechanisms, for example by implementing a redundant packet-sending strategy to reduce the impact of packet loss, or by introducing more efficient error recovery techniques such as Forward Error Correction (FEC).
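The first strategy (high-precision timestamps with sliding-window acquisition) might be sketched as follows; the class and method names are hypothetical, and a real deployment would drive `on_ack`/`on_timeout` from the actual transport feedback (e.g., acknowledgement or report packets):

```python
import time
from collections import deque

class LinkMonitor:
    """Minimal sketch of real-time packet-loss / delay monitoring: record a
    high-resolution send timestamp per packet, then derive the loss rate and
    average delay from acknowledgements over a sliding window."""
    def __init__(self, window: int = 200):
        self.sent = {}                    # seq -> send time (perf_counter)
        self.rtts = deque(maxlen=window)  # recent round-trip times, seconds
        self.acked = 0
        self.lost = 0

    def on_send(self, seq: int):
        self.sent[seq] = time.perf_counter()

    def on_ack(self, seq: int):
        t0 = self.sent.pop(seq, None)
        if t0 is not None:
            self.rtts.append(time.perf_counter() - t0)
            self.acked += 1

    def on_timeout(self, seq: int):
        if self.sent.pop(seq, None) is not None:
            self.lost += 1

    def loss_rate(self) -> float:
        total = self.acked + self.lost
        return self.lost / total if total else 0.0

    def avg_delay_ms(self) -> float:
        return 1000 * sum(self.rtts) / len(self.rtts) if self.rtts else 0.0
```

The windowed `loss_rate()` and `avg_delay_ms()` values are exactly the inputs the bandwidth-evaluation rules described later would consume.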
As can be seen from the above examples, in this embodiment the evaluation result of the corresponding network bandwidth is obtained by analyzing at least one of the network packet loss rate and the network delay. It should be noted that, besides the network packet loss rate and the network delay, the cloud rendering server may also evaluate network quality along more dimensions when obtaining the evaluation result of the current network bandwidth: for example, combining the jitter rate with bandwidth variation to obtain a more comprehensive network quality assessment, or performing cross-layer data analysis that combines information from the application layer, the transport layer, and the network layer.
In some optional implementations, when the server determines from parameters such as the network packet loss rate and the network delay that the evaluation result of the current network bandwidth is good or poor (i.e., not normal), this indicates that the current bandwidth estimate has not yet reached the optimal transmission requirement and needs further adjustment. Based on this, as shown in fig. 3, after the server performs step 104 and before it performs step 105, the following steps may further be included:
Step 301, if the evaluation result of the current network bandwidth determined from the network packet loss rate is good or poor, adjusting the current bandwidth evaluation value according to a first preset rule until the evaluation result of the network bandwidth is normal;
Step 302, if the evaluation result of the current network bandwidth determined from the network delay is good or poor, adjusting the current bandwidth evaluation value according to a second preset rule until the evaluation result of the network bandwidth is normal.
That is, if the server determines from the network packet loss rate and/or the network delay that the current network bandwidth is good or poor, the server may adjust the bandwidth evaluation until it is normal.
In an alternative embodiment of the present disclosure, step 301 may further include:
That is, the evaluation result of the network bandwidth is judged based on the packet loss rate; in other words, the current bandwidth is estimated from the real-time packet loss rate. If the evaluation result of the current bandwidth is good or poor, the current bandwidth evaluation value is adjusted according to the first preset rule until the evaluation result of the network bandwidth is normal.
Optionally, adjusting the current bandwidth evaluation value according to the first preset rule includes: presetting two packet loss rates, a normal packet loss rate X% and an abnormal packet loss rate Y%, where X and Y are constants. If the packet loss rate is less than or equal to X%, the loss is normal, which indicates that the current network quality is good and the bandwidth has not yet reached its upper limit, so the estimated network bandwidth should be increased, i.e., the current estimated bandwidth value is multiplied by a growth coefficient A, where A is a constant. The network packet loss rate is then judged again after multiplying by the growth coefficient A: if it is less than or equal to Y%, the current network quality is normal and the bandwidth estimate is fairly accurate, so the currently estimated bandwidth value is kept unchanged; if the packet loss rate is greater than Y%, the current network quality is poor and serious network congestion has occurred, so the estimated bandwidth is immediately reduced, i.e., the current estimated bandwidth value is multiplied by a reduction coefficient B, where B is a constant.
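The first preset rule above can be sketched directly; X, Y, A, and B are constants whose concrete values here (2.0, 10.0, 1.08, 0.85) are purely illustrative, not taken from the disclosure:

```python
def adjust_bandwidth_by_loss(estimate_kbps: float, loss_pct: float,
                             x: float = 2.0, y: float = 10.0,
                             a: float = 1.08, b: float = 0.85) -> float:
    """First preset rule as described: loss <= X% -> grow the estimate by
    growth coefficient A; loss > Y% -> shrink by reduction coefficient B;
    otherwise the estimate is accurate and is kept unchanged."""
    if loss_pct <= x:        # normal loss: bandwidth still has headroom
        return estimate_kbps * a
    if loss_pct > y:         # severe congestion: back off immediately
        return estimate_kbps * b
    return estimate_kbps     # between X% and Y%: network quality is normal
```

Calling this once per monitoring interval yields the multiplicative increase/decrease behavior the text describes, converging toward the "normal" band.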
In an alternative embodiment of the present disclosure, step 302 may further include:
That is, the bandwidth is evaluated based on the network delay; in other words, the current network transmission delay condition is judged. If the evaluation result of the current bandwidth is good or poor, the current bandwidth evaluation value is adjusted according to the second preset rule until the evaluation result of the network bandwidth is normal.
Optionally, adjusting the current bandwidth evaluation value according to the second preset rule includes: if the network transmission delay decreases continuously, the network communication quality is good, and the current estimated bandwidth value is multiplied by the growth coefficient A; if the network transmission delay increases continuously, the network communication quality is poor and the bandwidth estimate needs to be reduced, i.e., the current estimated bandwidth value is multiplied by the reduction coefficient B; otherwise, the network communication quality is normal and the current bandwidth estimate is kept unchanged.
In cloud rendering services, determining the growth and reduction coefficients of the bandwidth evaluation is critical, since these coefficients govern the dynamic adjustment of the estimated network bandwidth and thus the transmission efficiency and quality. These coefficients and their adjustment strategies may be determined as follows:
Determining the growth and reduction coefficients
1. Based on empirical values: the initial coefficients may be set according to empirical values of network engineering, such as standard coefficients used in general network optimization practice.
2. Adaptive algorithm: and dynamically adjusting the coefficients according to the historical data by using an adaptive algorithm to adapt to different network environments and traffic modes.
3. Machine learning optimization: the coefficients are automatically optimized according to the network performance data using a machine learning model.
4. User requirements and content types: these coefficients are adjusted taking into account the user's sensitivity to quality of service and the type of content (e.g., high definition video, games, etc.).
In summary, the growth coefficient and the reduction coefficient may be fixed constants or variable values obtained according to a corresponding policy; determining them requires comprehensively considering the network environment, user requirements, content types, and empirical values, and the bandwidth evaluation is optimized through continuous monitoring and adjustment. This dynamic adjustment strategy helps to maintain optimal quality of service under varying network conditions.
Optionally, judging that the network bandwidth is good according to the network packet loss rate and the network delay may include: if the network packet loss rate is less than or equal to the first preset packet loss rate and the current network delay decreases continuously, judging that the network bandwidth is good.
Optionally, judging that the network bandwidth is normal according to the network packet loss rate and the network delay may include: if the network packet loss rate is greater than the first preset packet loss rate and less than or equal to the second preset packet loss rate, and the current network delay is within a preset delay interval, judging that the network bandwidth is normal.
Optionally, judging that the network bandwidth is poor according to the network packet loss rate and the network delay may include: if the network packet loss rate is greater than the second preset packet loss rate, or the current network delay increases continuously, judging that the network bandwidth is poor.
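The three judgments above can be combined into one classification function; the threshold values are illustrative, and the delay condition is reduced to a "falling"/"stable"/"rising" trend label for brevity:

```python
def classify_bandwidth(loss_pct: float, delay_trend: str,
                       p1: float = 2.0, p2: float = 10.0) -> str:
    """Classify the current bandwidth as the text describes. p1/p2 are the
    first/second preset packet-loss thresholds (illustrative values);
    delay_trend is one of 'falling', 'stable', 'rising'."""
    if loss_pct <= p1 and delay_trend == "falling":
        return "good"                       # low loss AND shrinking delay
    if loss_pct > p2 or delay_trend == "rising":
        return "poor"                       # heavy loss OR growing delay
    if p1 < loss_pct <= p2 and delay_trend == "stable":
        return "normal"                     # loss in band AND delay in interval
    return "indeterminate"                  # combinations the text leaves open
```

Note that combinations such as low loss with stable delay are not explicitly classified in the text, hence the fallthrough branch.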
In some optional implementations, the server dynamically adjusting the encoder parameters of the server according to the evaluation result of the network bandwidth may include: when the network bandwidth is normal, setting a target code rate value of the encoder according to the network bandwidth evaluation value, where the target code rate value satisfies: the minimum code rate of the current image quality encoding is less than or equal to the target code rate value, and the target code rate value is less than or equal to the maximum code rate of the current image quality encoding, i.e., the target code rate value lies in the range [minimum code rate of the current image quality encoding, maximum code rate of the current image quality encoding].
Furthermore, the server can dynamically adjust coding parameters (such as code rate, resolution, frame rate, etc.) according to new image quality requirements and network bandwidth evaluation results obtained in real time, so as to reduce delay and packet loss to the greatest extent while ensuring image quality.
Specifically, in the cloud rendering service process, a target code rate value of the encoder is set based on the network bandwidth evaluation value evaluated in real time, wherein the target code rate value must meet the requirement that the minimum code rate of the current image quality encoding is smaller than or equal to the target code rate value, and the target code rate value is smaller than or equal to the maximum code rate of the current image quality encoding. Meanwhile, in the process of dynamically adjusting the encoder parameters, the cloud rendering service can also execute a dynamic image quality switching strategy according to the evaluation condition of the network bandwidth, namely, the current image quality is switched to higher image quality when the network bandwidth is improved, and the current image quality is switched to lower image quality when the network bandwidth is reduced. After setting new coding parameters, the cloud rendering server can use the encoder with the adjusted coding parameters to encode the rendering data, push the encoded rendering data to the client, and simultaneously continue to execute real-time bandwidth evaluation.
That is, after the parameters of the server's encoder are dynamically adjusted, the rendering data can be encoded by the parameter-adjusted encoder and the encoded rendering data pushed to the client. For convenience of distinction, the rendering data in this stage is denoted as second rendering data: after the encoder parameters are dynamically adjusted, the parameter-adjusted encoder may be used to encode the second rendering data, which is then pushed to the client. In this way, the server's encoder parameters can be adjusted in time to a state suited to the current network bandwidth, and the adjusted encoder is dynamically used to encode the current rendering data, so that the rendering data pushed to the client better matches the network transmission state, video delay and stuttering are avoided, and the user experience is improved.
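The target-code-rate clamping and dynamic image-quality switching described above can be sketched as follows; the quality ladder and the per-tier code-rate bounds are illustrative assumptions, not values from the disclosure:

```python
QUALITY_LADDER = ["smooth", "standard_definition", "high_definition"]

# Illustrative per-quality code-rate bounds in kbps (not from the patent).
RATE_BOUNDS = {"smooth": (300, 1000),
               "standard_definition": (1000, 3000),
               "high_definition": (3000, 8000)}

def retarget_encoder(quality: str, bandwidth_kbps: float):
    """Clamp the target code rate into [min, max] of the current quality
    tier; switch up one tier when bandwidth exceeds the tier maximum, down
    one tier when it falls below the minimum, as the switching strategy
    describes."""
    idx = QUALITY_LADDER.index(quality)
    lo, hi = RATE_BOUNDS[quality]
    if bandwidth_kbps > hi and idx + 1 < len(QUALITY_LADDER):
        quality = QUALITY_LADDER[idx + 1]      # bandwidth improved: go up
    elif bandwidth_kbps < lo and idx > 0:
        quality = QUALITY_LADDER[idx - 1]      # bandwidth degraded: go down
    lo, hi = RATE_BOUNDS[quality]
    target = max(lo, min(hi, bandwidth_kbps))  # clamp into the tier's range
    return quality, target
```

After each bandwidth evaluation, the server would apply `retarget_encoder`, reconfigure the encoder with the returned target, and keep pushing the subsequently encoded (second) rendering data.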
In the process of dynamically adjusting encoding parameters in a cloud rendering service, determining the highest and lowest code rates is a key link. Determining these parameters requires considering a number of factors, including the characteristics of the compression algorithm, the target image quality requirements, and the network bandwidth conditions. The relevant factors and adjustment strategies are detailed as follows:
Factors determining the highest and lowest code rates
1. Target image quality requirements: the highest and lowest code rates are determined according to the video image quality (such as standard definition, high definition, 4K, etc.) to be provided. Where high image quality generally requires higher code rates to maintain sharpness and detail.
2. Compression algorithm characteristics: different video compression algorithms (e.g., h.264, h.265, VP9, etc.) may provide different image quality at the same code rate. Efficient compression algorithms may still provide good image quality at lower code rates.
3. Network bandwidth conditions: the upper limit and stability of the network bandwidth affects the maximum amount of data available for video transmission. In bandwidth limited situations, it may be desirable to reduce the code rate to accommodate network conditions.
4. User equipment capability: the client device's decoding and display capability is considered to determine an appropriate code rate range; a low-performance device may be unable to play high-code-rate video smoothly.
5. Content type: the requirements for code rates are different for different types of content (e.g., motion pictures, video games, text images, etc.). Dynamic scenes or high detail content typically require higher code rates to maintain sharpness.
6. Real-time requirements: for applications requiring high real-time performance (such as cloud games), the rate setting also needs to take delay and response speed into account.
Relation between code rate and compression algorithm and network bandwidth
1. Relationship to compression algorithm: different compression algorithms provide different video quality at the same code rate. The efficient algorithm can maintain higher image quality at lower code rates.
2. Relation to bandwidth: bandwidth is an important factor limiting code rate selection. A high-bandwidth environment allows a higher code rate, while at lower bandwidth the code rate must be reduced to prevent stuttering or delay.
In summary, the determination of the highest code rate and the lowest code rate is a result of a multi-factor consideration, which aims to ensure the image quality and adapt to the network condition and the capability of the user terminal. Dynamic adjustment of these parameters is critical to achieving efficient, stable cloud rendering transmission.
It is understood that the foregoing embodiments are merely examples and may be modified in actual implementation; modifications of the foregoing embodiments that a person skilled in the art can arrive at without inventive effort fall within the protection scope of the present disclosure and are not repeated here.
All the above optional solutions may be mutually referred to or combined to form an optional embodiment of the disclosure, which is not described herein in detail.
Based on the same inventive concept, the embodiments of the present disclosure further provide a cloud rendering device, and because the principle of the problem solved by the cloud rendering device is similar to that of the foregoing cloud rendering method, implementation of the cloud rendering device may refer to implementation of the foregoing cloud rendering method, and repeated parts will not be repeated.
Fig. 4 is a schematic structural diagram of a cloud rendering apparatus 400 according to an embodiment of the present disclosure, referring to fig. 4, the cloud rendering apparatus 400 includes:
an initialization module 401, configured to initialize an encoder of a server based on an optimal video compression algorithm transmitted by a client;
an encoding module 402, configured to encode, by an initialized encoder, the first rendering data according to an optimal video compression algorithm;
The pushing module 403 is configured to push the encoded first rendering data to the client, monitor the push transmission state, and obtain an evaluation result of the current network bandwidth according to the monitoring result;
And the adjusting module 404 is configured to dynamically adjust parameters of the encoder according to the evaluation result.
In an alternative embodiment, the initialization module 401 is configured to initialize an encoder of the server based on an optimal video compression algorithm transmitted by the client, specifically: the initialization module 401 is configured to:
Obtaining the image quality requirement of a client;
and initializing an encoder of the server according to the optimal video compression algorithm and the image quality requirement transmitted by the client.
In an alternative embodiment, the push module 403 is configured to monitor the push transmission state, specifically: the push module 403 is configured to:
And monitoring network packet loss rate and/or network delay in the process of pushing the encoded first rendering data to the client.
In an optional implementation manner, the push module 403 is configured to obtain, according to the monitoring result, an evaluation result of the current network bandwidth, specifically: the push module 403 is configured to:
If the network packet loss rate is less than or equal to the first preset packet loss rate and the network delay decreases continuously, determining that the current network bandwidth is good; or
if the network packet loss rate is greater than the first preset packet loss rate and less than or equal to the second preset packet loss rate, and the network delay is within a preset delay interval, determining that the current network bandwidth is normal; or
if the network packet loss rate is greater than the second preset packet loss rate or the network delay increases continuously, determining that the current network bandwidth is poor.
In an alternative embodiment, the cloud rendering apparatus 400 further includes:
A bandwidth optimization module, configured to: if the current network bandwidth is determined to be good or poor according to the network packet loss rate and/or the network delay, adjust the bandwidth evaluation until it is normal.
In an alternative embodiment, the adjustment module 404 is configured to dynamically adjust parameters of the encoder according to the evaluation result, specifically: the adjustment module 404 is configured to:
and setting a target code rate value of the encoder according to the network bandwidth evaluation value when the network bandwidth is normal.
In an alternative embodiment, the push module 403 is further configured to:
and encoding the second rendering data through the encoder after parameter adjustment, and pushing the encoded second rendering data to the client.
According to the device provided by the embodiments of the present disclosure, an encoder of a server is initialized based on the optimal video compression algorithm transmitted by a client; the initialized encoder then encodes the rendering data according to that algorithm, the encoded rendering data is pushed to the client, the push transmission state is monitored, an evaluation result of the network bandwidth is obtained from the monitoring result, and the encoder parameters are dynamically adjusted according to the evaluation result. In this way, an optimal video compression algorithm adapted to the client can be selected according to the client's capabilities, such as its decoding capability, and the server's encoder initialized accordingly; while pushing the code stream output by the server's encoder to the client, the encoder parameters can be dynamically adjusted based on the bandwidth evaluation result. This achieves a reasonable setting of the server's encoder parameters, takes both the client's capabilities and the network bandwidth into account while dynamically outputting the corresponding video, and ensures the network communication quality of the cloud rendering process so that video delay or stuttering does not occur, achieving a better cloud rendering effect.
It should be noted that: in the cloud rendering device provided in the foregoing embodiment, only the division of the foregoing functional modules is used as an example, and in practical application, the foregoing functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to perform all or part of the functions described above. In addition, embodiments of the cloud rendering device and the cloud rendering method provided in the foregoing embodiments belong to the same concept, and detailed implementation processes of the embodiments are described in method embodiments, which are not repeated herein.
A cloud rendering device in the embodiments of the present disclosure may be a virtual device, or may be a component, an integrated circuit, or a chip in a server or a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (UMPC), netbook or Personal Digital Assistant (PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the disclosure are not limited in particular.
A cloud rendering device in an embodiment of the present disclosure may be a device having an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, and the embodiments of the present disclosure are not limited specifically.
The cloud rendering device provided in the embodiments of the present disclosure can implement each process implemented by the embodiments of the methods of fig. 1 to 3, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 5, the embodiment of the present disclosure further provides an electronic device 900, including a processor 901, a memory 902, and a program or an instruction stored in the memory 902 and capable of being executed on the processor 901, where the program or the instruction implements each process of the embodiment of the method when executed by the processor 901, and the process can achieve the same technical effect, and for avoiding repetition, a description is omitted herein. It should be noted that, the electronic device in the embodiment of the disclosure includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 6 is a schematic hardware structure of an electronic device implementing an embodiment of the disclosure.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1010 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
It should be appreciated that in embodiments of the present disclosure, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, the graphics processor 10041 processing image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 1009 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 1010 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The embodiments of the present disclosure further provide a computer readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the foregoing embodiments, and the same technical effects can be achieved, and in order to avoid repetition, a detailed description is omitted here.
The processor is a processor in the electronic device in the above embodiment. Readable storage media include computer readable storage media such as Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic or optical disks, and the like.
The embodiment of the disclosure further provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction, implement each process of the foregoing method embodiment, and achieve the same technical effect, so that repetition is avoided, and no further description is given here.
It should be understood that the chips referred to in the embodiments of the present disclosure may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present disclosure is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware alone, though in many cases the former is the preferred implementation. Based on such an understanding, the technical solutions of the present disclosure may be embodied, essentially or in part, in the form of a computer software product stored on a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and including instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the various embodiments of the present disclosure.
The embodiments of the present disclosure have been described above with reference to the accompanying drawings, but the present disclosure is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Guided by the present disclosure, those of ordinary skill in the art may devise many other forms without departing from the spirit of the disclosure and the scope of the claims, all of which fall within the protection of the present disclosure.
Claims (10)
1. A cloud rendering method, wherein the method is applied to a server, the method comprising:
initializing an encoder of the server based on an optimal video compression algorithm transmitted by a client;
encoding, by the initialized encoder, first rendering data according to the optimal video compression algorithm;
pushing the encoded first rendering data to the client, monitoring a push transmission state, and obtaining an evaluation result of a current network bandwidth according to a monitoring result; and
dynamically adjusting parameters of the encoder according to the evaluation result.
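The four steps of claim 1 can be sketched as a server-side flow. This is a minimal illustration, not the patent's implementation; the class, the `encoder_factory`, and the `client`/`monitor` objects are hypothetical stand-ins for the server's encoder, push channel, and transmission monitor:

```python
class CloudRenderServer:
    """Illustrative sketch of the claim-1 server flow (hypothetical API)."""

    def __init__(self, encoder_factory):
        self.encoder_factory = encoder_factory
        self.encoder = None

    def initialize_encoder(self, client_codec):
        # Step 1: initialize the encoder with the client's optimal
        # video compression algorithm (e.g. "h264", "h265", "av1").
        self.encoder = self.encoder_factory(client_codec)

    def push_frame(self, render_data, client, monitor):
        # Step 2: encode the first rendering data with that algorithm.
        packet = self.encoder.encode(render_data)
        # Step 3: push to the client while monitoring the transmission
        # state, yielding an evaluation of the current network bandwidth.
        client.send(packet)
        evaluation = monitor.evaluate()
        # Step 4: dynamically adjust encoder parameters from the result.
        self.encoder.adjust(evaluation)
        return evaluation
```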
2. The cloud rendering method of claim 1, wherein the initializing the encoder of the server based on the optimal video compression algorithm transmitted by the client comprises:
acquiring an image quality requirement of the client; and
initializing the encoder of the server according to the optimal video compression algorithm transmitted by the client and the image quality requirement.
3. The cloud rendering method of claim 1, wherein the monitoring a push transmission state comprises:
monitoring a network packet loss rate and/or a network delay in the process of pushing the encoded first rendering data to the client.
4. The cloud rendering method of claim 3, wherein the obtaining the evaluation result of the current network bandwidth according to the monitoring result comprises:
if the network packet loss rate is less than or equal to a first preset packet loss rate and the network delay is continuously decreasing, determining that the current network bandwidth is good; or
if the network packet loss rate is greater than the first preset packet loss rate and less than or equal to a second preset packet loss rate, and the network delay is within a preset delay interval, determining that the current network bandwidth is normal; or
if the network packet loss rate is greater than the second preset packet loss rate or the network delay is continuously increasing, determining that the current network bandwidth is poor.
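The three branches of claim 4 can be sketched as an evaluation function. The threshold values below are assumptions (the claims only name a "first" and "second" preset packet loss rate and a "preset delay interval" without giving numbers), and "continuously decreasing/increasing" is read here as strictly monotone over a recent window of delay samples:

```python
# Hypothetical thresholds; the patent does not specify the values.
FIRST_LOSS = 0.01               # first preset packet loss rate (1%)
SECOND_LOSS = 0.05              # second preset packet loss rate (5%)
DELAY_INTERVAL = (20.0, 100.0)  # preset delay interval, milliseconds

def evaluate_bandwidth(loss_rate, delays_ms):
    """Classify the current network bandwidth as 'good', 'normal', or 'poor'."""
    if len(delays_ms) < 2:
        decreasing = increasing = False
    else:
        pairs = list(zip(delays_ms, delays_ms[1:]))
        decreasing = all(a > b for a, b in pairs)  # delay continuously decreasing
        increasing = all(a < b for a, b in pairs)  # delay continuously increasing
    if loss_rate > SECOND_LOSS or increasing:
        return "poor"    # claim 4, third branch
    if loss_rate <= FIRST_LOSS and decreasing:
        return "good"    # claim 4, first branch
    if (FIRST_LOSS < loss_rate <= SECOND_LOSS
            and DELAY_INTERVAL[0] <= delays_ms[-1] <= DELAY_INTERVAL[1]):
        return "normal"  # claim 4, second branch
    return "normal"      # cases the claims leave unspecified
```

The "poor" branch is checked first because claim 4 makes it disjunctive (high loss *or* rising delay), so it should override the other classifications.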
5. The cloud rendering method of claim 3 or 4, wherein the method further comprises:
if the current network bandwidth is determined to be good or poor according to the network packet loss rate and/or the network delay, adjusting the network bandwidth to normal.
6. The cloud rendering method of claim 5, wherein the dynamically adjusting the parameters of the encoder according to the evaluation result comprises:
setting a target code rate value of the encoder according to a network bandwidth evaluation value when the network bandwidth is normal.
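Claim 6 ties the encoder's target code rate to a bandwidth evaluation value when the network is normal. The sketch below assumes the evaluation value is an estimated throughput in kbit/s and applies a fixed safety margin; both the headroom factor and the `target_bitrate` attribute are illustrative assumptions, not stated in the patent:

```python
HEADROOM = 0.8  # hypothetical safety margin: use 80% of the estimated bandwidth

def set_target_bitrate(encoder, status, bandwidth_kbps):
    """Set the encoder's target code rate from the bandwidth evaluation.

    Only acts when the evaluated network bandwidth is 'normal', as in
    claim 6; `encoder` is assumed to expose a `target_bitrate` attribute.
    """
    if status == "normal":
        encoder.target_bitrate = int(bandwidth_kbps * HEADROOM)
    return encoder
```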
7. The cloud rendering method of claim 1, wherein the method further comprises:
encoding second rendering data by the parameter-adjusted encoder, and pushing the encoded second rendering data to the client.
8. A cloud rendering device, wherein the cloud rendering device is applied to a server, the cloud rendering device comprising:
an initialization module, configured to initialize an encoder of the server based on an optimal video compression algorithm transmitted by a client;
an encoding module, configured to encode first rendering data according to the optimal video compression algorithm by the initialized encoder;
a pushing module, configured to push the encoded first rendering data to the client, monitor a push transmission state, and obtain an evaluation result of a current network bandwidth according to a monitoring result; and
an adjusting module, configured to dynamically adjust parameters of the encoder according to the evaluation result.
9. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
10. An electronic device, comprising:
a memory for storing a computer program product; and
a processor for executing the computer program product stored in the memory, wherein the computer program product, when executed, implements the method of any one of claims 1 to 7.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311862233 | 2023-12-29 | ||
| CN2023118622338 | 2023-12-29 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN117956177A | 2024-04-30 |
Family
ID=90798260
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410170887.2A Pending CN117956177A (en) | 2023-12-29 | 2024-02-06 | Cloud rendering method, cloud rendering device, medium and equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117956177A (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119068547A (en) * | 2024-08-26 | 2024-12-03 | Hunan Mango Rongchuang Technology Co., Ltd. | Gesture recognition method, system, device and medium based on VR cloud rendering |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US7099954B2 (en) | Congestion control mechanism for streaming media | |
| US8254685B2 (en) | Detecting content change in a streaming image system | |
| EP3840390A1 (en) | Method and device for controlling video transcoding code rate | |
| US12192478B2 (en) | Adaptively encoding video frames using content and network analysis | |
| WO2022194140A1 (en) | Remote video transmitting method and transmitting apparatus, storage medium, and electronic device | |
| US9819715B2 (en) | Client side control of adaptive streaming | |
| US8725947B2 (en) | Cache control for adaptive stream player | |
| CN116017037B (en) | Method and system for dynamic parameter adjustment of adaptive bitrate algorithm | |
| JP2022545623A (en) | Prediction-Based Drop Frame Handling Logic in Video Playback | |
| CN114501014A (en) | Video coding parameter processing method, system, device and storage medium | |
| CN117956177A (en) | Cloud rendering method, cloud rendering device, medium and equipment | |
| EP4591554A1 (en) | Sender based adaptive bit rate control | |
| Li et al. | JUST360: Optimizing 360-degree video streaming systems with joint utility | |
| Rahman et al. | SABA: Segment and buffer aware rate adaptation algorithm for streaming over HTTP | |
| CN119545111A (en) | A streaming media video encoding method, device, equipment, storage medium and product | |
| WO2014066975A1 (en) | Methods and systems for controlling quality of a media session | |
| CN119255022A (en) | Video playback control method, device and electronic equipment | |
| Zhang et al. | A QOE-driven approach to rate adaptation for dynamic adaptive streaming over http | |
| US10135896B1 (en) | Systems and methods providing metadata for media streaming | |
| Wu et al. | Adaptive Bandwidth Prediction and Smoothing Glitches in Low‐Latency Live Streaming | |
| Yan et al. | Research on QoE-driven panoramic video bit rate selection strategy under edge computing | |
| CN114416236B (en) | A method, device, equipment and medium for processing virtual desktop data | |
| CN113938681B (en) | System and method for controlling video code rate in multiple modes | |
| Ye et al. | Backward-shifted strategies based on SVC for HTTP adaptive video streaming | |
| EP4471675A1 (en) | Machine learning anomaly detection on quality of service networking metrics |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |