Disclosure of Invention
The present disclosure provides a system that manages the uploading of third-party content to a watch folder of a CDN for further distribution. Based on pre-established weighted policies, an ingestion controller in the CDN controls the speed at which content is uploaded from various content sources. When multiple content sources contend for network bandwidth and the total required bandwidth is greater than the maximum bandwidth capacity of the inbound content pipe, the ingestion controller may limit the upload speed of each content source such that the total ingestion bandwidth utilized by the content sources does not exceed the maximum bandwidth capacity of the inbound content pipe. When the demand for network bandwidth drops to a level equal to or less than the maximum bandwidth capacity of the inbound content pipe, the ingestion controller may allow each content source to upload its content at normal speed.
It is an object of the present disclosure to minimize or avoid any modifications to the content sources. Thus, in one exemplary embodiment, the ingestion controller may limit the upload speed of the content sources by, for example, limiting the speed at which content is retrieved from the input buffer queue associated with each content source. The retrieval rate from each buffer queue is based at least on the bandwidth weighting assigned to the associated content source. In one embodiment, the ingestion controller limits the disk write speed at which the Operating System (OS) of the watch folder fetches the content of each source from its associated input buffer queue and writes the content to memory in the watch folder. This causes the input buffer queues associated with the content sources to become full, which in turn causes flow control mechanisms in the transmission network, e.g., the internet, to control the flow of packets from each content source. When the demand for network bandwidth drops to a level equal to or less than the maximum bandwidth capacity of the inbound content pipe, the ingestion controller may allow the OS to write to memory at normal speed, thereby allowing each content source to send its content at its full rate. During times of network contention, premium users may be allocated a faster disk write speed than non-premium users. All users on a particular service tier may be assigned equal disk write speeds.
In one embodiment, a method for managing ingestion of electronic content in a CDN is disclosed. Content is received from one or more content sources through an inbound content pipe having a maximum bandwidth capacity. The method comprises the following steps: obtaining, by the ingestion controller, a bandwidth weighting assigned to each of the one or more content sources, wherein the bandwidth weighting assigned to each content source corresponds to a fraction of a maximum bandwidth capacity of the inbound content pipe; and controlling, by the ingestion controller, an upload rate from each content source based at least on the assigned bandwidth weighting for each content source.
The ingestion controller may include an input buffer connected to a memory, wherein the input buffer is configured to receive content from the one or more content sources in one or more input buffer queues associated with the content sources, and the memory is configured to store the received content upon retrieval from the input buffer. The controller may control the upload rate by controlling the speed at which content is retrieved from each queue of the input buffer based at least on the assigned bandwidth weighting for each associated content source. Slowing the retrieval speed for a given input queue causes the given input queue to become full, thereby triggering a network flow control mechanism that causes the content source associated with the given input queue to slow its transmission rate as instructed by the flow control mechanism.
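The backpressure described above, in which slow retrieval fills a bounded queue and thereby throttles the sender, can be illustrated with a minimal Python sketch. The sketch is illustrative only and is not part of the disclosed embodiments: a blocking in-process queue stands in for the input buffer queue, and the producer's blocking `put` stands in for the network flow control mechanism; all names are hypothetical.

```python
import queue
import threading
import time

def producer(q, n):
    """Content source: blocks whenever the bounded input queue is full."""
    for i in range(n):
        q.put(i)  # blocks until the consumer drains the queue (backpressure)

def throttled_consumer(q, n, delay):
    """Ingestion controller: pacing retrieval bounds the producer's rate."""
    out = []
    for _ in range(n):
        out.append(q.get())
        time.sleep(delay)  # retrieval pacing, e.g. derived from a WFQ weight
    return out

# A small bounded queue keeps the producer at most 2 items ahead.
q = queue.Queue(maxsize=2)
t = threading.Thread(target=producer, args=(q, 10))
t.start()
received = throttled_consumer(q, 10, 0.001)
t.join()
```

Because the queue holds at most two items, the producer can never run ahead of the paced consumer, mirroring how a full input buffer queue causes the network's flow control to slow the content source.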
In another embodiment, an ingestion controller for managing ingestion of electronic content in a CDN is disclosed. Content is received from one or more content sources through an inbound content pipe having a maximum bandwidth capacity. The ingestion controller includes a database of bandwidth weightings assigned to each of the one or more content sources, wherein the bandwidth weighting assigned to each content source corresponds to a fraction of the maximum bandwidth capacity of the inbound content pipe. The controller further comprises: an input buffer configured to receive content from the one or more content sources in one or more input buffer queues associated with the content sources; and a content retrieval mechanism configured to receive the bandwidth weightings from the database and retrieve content from the input queues at an upload rate for each content source based at least on the assigned bandwidth weighting for each content source.
In another embodiment, a system for managing ingestion of electronic content in a CDN is disclosed. Content is received from one or more content sources through an inbound content pipe having a maximum bandwidth capacity. The system includes an operator Policy Management System (PMS) configured to assign bandwidth weightings to the one or more content sources, wherein the bandwidth weighting assigned to each content source corresponds to a fraction of the maximum bandwidth capacity of the inbound content pipe. The system also includes an ingestion controller configured to control an upload bit rate of each of the one or more content sources. The ingestion controller includes: a database configured to store the bandwidth weightings assigned to each of the one or more content sources; an input buffer configured to receive content from the one or more content sources in one or more input queues associated with the content sources; and a content retrieval mechanism configured to receive the bandwidth weightings from the database and retrieve content from the input queues at an upload rate for each content source based at least on the assigned bandwidth weighting for each content source.
The system enables CDN owners to offer premium service levels to content sources by providing them with prioritized upload speeds. In addition, the system prevents content sources from utilizing a disproportionate share of bandwidth at the expense of other content sources on the same service tier. Other features and benefits of embodiments of the present invention will become apparent from the detailed description below.
Detailed Description
The present invention now will be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. In the drawings, like reference numerals designate like elements. Additionally, it should be understood that the present invention can be implemented in hardware or a combination of hardware and software stored on a non-transitory memory and executed by a general purpose computer or microprocessor.
Fig. 2 is a simplified block diagram of a network architecture 25 for uploading content to the CDN 11, in accordance with certain embodiments. Content sources CS-1 through CS-4 (12-15) may compete to upload content over a packet data network, such as the internet 16, to a watch folder disk farm 17 in a data center 18 commonly accessible by the content sources and CDNs. It should be understood that in the context of the present disclosure, the term "disk farm" is not intended to limit the watch folder functionality to a particular hardware implementation. The watch folder may be any suitable memory device with sufficient memory capacity and response time to store content from multiple content sources before forwarding the content to the CDN. For example, the memory device may be one or more of the following: CPU registers, CPU cache, Random Access Memory (RAM), disk cache, Compact Disk (CD), Solid State Disk (SSD), in-network (cloud) storage, tape, and other backup media. In the remainder of the description herein, the watch folder functionality will be referred to simply as a "watch folder".
In the illustrated example, CS-1 to CS-3 each upload their associated content using an internet connection having a maximum bandwidth of 100 Mb/s. CS-4 utilizes an internet connection with a maximum bandwidth of 30 Mb/s. The inbound content virtual pipe 19 from the internet to the watch folder 17 has a maximum bandwidth capacity of 200 Mb/s in the illustrated example. From the watch folder, the CDN delivery system 21 delivers ingested content to the CDN 11, which may include an origin server 22 and a plurality of regional servers 23.
When multiple content sources contend for network bandwidth and the total required bandwidth is greater than the maximum bandwidth capacity of the inbound content pipe 19, the ingestion controller 26 is configured to control the upload speed of each content source such that the total ingestion bandwidth utilized by the content sources is equal to or less than the maximum bandwidth capacity of the inbound content pipe. The ingestion controller may allow each content source to upload its content at normal speed when the demand for network bandwidth drops to a level equal to or less than the maximum bandwidth capacity of the inbound content pipe. The operation of the ingestion controller is described in more detail below in conjunction with Fig. 4.
Fig. 3 is a flow diagram of an exemplary embodiment of an overall method for controlling the upload of content from two or more content sources 12-15 to a CDN 11. At step 31, the ingestion controller 26 obtains a bandwidth weighting 42 assigned to each of the two or more content sources, where the bandwidth weighting assigned to each content source corresponds to a fraction of the maximum bandwidth capacity of the inbound content pipe 19. In step 32, the ingestion controller 26 controls the upload bit rate from each content source based at least on the assigned bandwidth weighting 42 for each content source.
Fig. 4 is a simplified block diagram of an exemplary embodiment of the ingestion controller 26 of Fig. 2. In the illustrated embodiment, the ingestion controller 26 includes: an input buffer 41 comprising a flow control mechanism 55; a set of predefined Weighted Fair Queuing (WFQ) weights 42 associated with the content sources 12-15; and an Operating System (OS)/disk writer 43 for the watch folder 17. The operation of the ingestion controller may be controlled, for example, by a processor 44 running computer program instructions stored in a memory 45.
The WFQ weights 42 include bandwidth weightings assigned to each of the content sources 12-15 by the operator Policy Management System (PMS) 46 based on network policies and the service tier of the respective content source. Each WFQ weight corresponds to a fraction of the maximum bandwidth capacity of the inbound content pipe 19 allocated to the associated content source. The PMS may dynamically assign WFQ weights to content sources as the level of contention for network bandwidth increases and decreases. As described in more detail below, each content source may also assign weights to individual content streams within the content source's allocated bandwidth.
Fig. 5 is a simplified block diagram of an exemplary embodiment of the input buffer 41 of Fig. 4. The input buffer 41 may be configured to receive content from the content sources in a plurality of input queues 51-54, where each input queue is associated with a different one of the content sources CS-1 to CS-4 (12-15). The size of each queue may be determined in proportion to the allocated bandwidth of the associated content source. The flow control mechanism 55 interacts with the content sources to control the upload bit rate of each source using flow control instructions 56, as described in more detail below.
Fig. 6 is a functional block diagram of an exemplary embodiment of a table of WFQ weights 42 applied to the input buffer 41 for controlling the bandwidth utilization of the various content sources. The left column shows the WFQ weights that may be applied by the PMS 46 to each content source according to network policy and the service tier of the respective content source. Without the weights, each content source would attempt to transmit at its maximum rate, requiring an inbound content pipe with a bandwidth of 330 Mb/s. However, the maximum capacity of the inbound content pipe is 200 Mb/s, so the weights must reduce the maximum transmission rates to allowed transmission rates whose sum does not exceed 200 Mb/s. In the illustrated example, CS-1 and CS-2 are each assigned a level-3 weight (66.667 Mb/s), CS-3 is assigned a level-2 weight (44.444 Mb/s), and CS-4 is assigned a level-1 weight (22.222 Mb/s). If a content source stops transmitting, or if another content source starts transmitting, the WFQ weights may be adjusted up or down to keep the sum of the transmission rates within the 200 Mb/s capacity.
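The allocated rates in this example follow directly from proportional sharing: each source receives the pipe capacity multiplied by its weight divided by the sum of all weights. A minimal Python sketch, for illustration only (the function name is hypothetical, not from the disclosure):

```python
def allocate_bandwidth(capacity, weights):
    """Split the inbound pipe capacity among sources in proportion to
    their WFQ weights: rate_i = capacity * w_i / sum(w)."""
    total = sum(weights.values())
    return {src: capacity * w / total for src, w in weights.items()}

# Weights from the example: CS-1 and CS-2 at level 3, CS-3 at 2, CS-4 at 1.
rates = allocate_bandwidth(200, {"CS-1": 3, "CS-2": 3, "CS-3": 2, "CS-4": 1})
# 200 * 3/9 ≈ 66.667 Mb/s for CS-1 and CS-2,
# 200 * 2/9 ≈ 44.444 Mb/s for CS-3, and 200 * 1/9 ≈ 22.222 Mb/s for CS-4.
```

Re-running the function whenever a source starts or stops transmitting yields the up/down weight adjustment described above.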
A content source may upload multiple content streams within the content source's allocated bandwidth. Such a multi-stream content source may also allocate bandwidth to its individual content streams according to priorities assigned by the multi-stream content source. For example, CS-1 (12) may have three content streams 61-63 to upload to the CDN, and may divide its 66.667 Mb/s among the three streams. As shown in this example, CS-1 assigns priority levels 1, 2, and 3 to the three content streams, which are associated with bandwidths of 33.334 Mb/s, 22.222 Mb/s, and 11.111 Mb/s, respectively. The multi-stream content sources CS-2 and CS-3 also assign varying priority levels to their multiple content streams.
Fig. 7 is a flow diagram illustrating in more detail an exemplary embodiment of a method for controlling the upload of content from two or more content sources 12-15 to the CDN 11. At step 71, the PMS 46 assigns a bandwidth weighting, for example, to each CS having content to be uploaded to the CDN 11. The weights may be preconfigured and assigned to the network connections of the CSs and the content flowing through those connections before the content is actually transferred. The weights may also change during transmission. At step 72, the CSs start uploading their content. At step 73, the OS/disk writer 43 retrieves the content from the input queues 51-54 of the input buffer 41 according to the assigned bandwidth weights and writes the content to the watch folder 17. The ingestion controller causes the OS/disk writer to reduce its speed of fetching content from the buffer queue of each content source in proportion to the WFQ weights 42 assigned to each content source and to each individual stream (if applicable). The bandwidth weights do not affect the upload bit rates unless there is competing content transmission that exceeds, or is expected to exceed, the maximum bandwidth capacity of the inbound content virtual pipe 19. Only then does the ingestion controller 26 begin controlling the ingestion speed.
Even if each content source desires to transfer content at the maximum bit rate of its connection to the internet 16, the present disclosure controls the average upload bit rate of each content source through the speed at which the OS/disk writer 43 retrieves content from the buffer queues 51-54 and writes the content to the watch folder 17, so that the maximum bandwidth capacity of the inbound content pipe 19 is not exceeded. When the OS/disk writer 43 slows down the retrieval speed for a particular content source, the queue in the input buffer 41 associated with that content source quickly fills up. At step 74, the flow control mechanism 55 associated with the input buffer is triggered to slow down the upload bit rate of the associated content source. The flow control mechanism may be a standard flow control mechanism such as that of the Transmission Control Protocol (TCP) used for signaling over the internet and other Internet Protocol (IP)-based networks. The flow control instructions 56 may be included in acknowledgement messages sent from the input buffer 41 to the various content sources. The flow control instructions may indicate, for example, that the content source in question is authorized to transmit a given number of data packets, but must then wait for further authorization before transmitting additional packets. When a content source completes its upload or a new content source is added (step 75), the method returns to step 71, where the weightings are recalculated, and the process repeats from step 72.
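The authorize-then-wait behavior of the flow control instructions 56 amounts to credit-based flow control, sketched below in Python for illustration only (class and method names are hypothetical, not from the disclosure):

```python
class FlowControl:
    """Credit-based flow control: each acknowledgement authorizes the
    source to transmit a fixed number of packets; once the credits are
    spent, the source must wait for further authorization."""

    def __init__(self, grant_size):
        self.grant_size = grant_size
        self.credits = grant_size  # packets currently authorized

    def try_send(self):
        """Return True if a packet may be sent now, consuming one credit."""
        if self.credits == 0:
            return False  # must wait for the next acknowledgement
        self.credits -= 1
        return True

    def acknowledge(self):
        """An acknowledgement from the input buffer grants a fresh batch."""
        self.credits = self.grant_size
```

Delaying acknowledgements for a source whose input queue is full is then sufficient to slow that source's upload bit rate.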
In this way, by controlling the buffer retrieval rate for each of the content sources 12-15, the ingestion controller 26 controls the amount of network bandwidth utilized by each content source. When the buffer retrieval speed of a particular content source is further limited, the network bandwidth utilization of that content source is reduced, because the input queue of that source remains full for a longer period of time and the flow control mechanism 55 accordingly limits the number of packets that the particular content source is permitted to transmit. During times of network contention, premium users may be allocated faster buffer retrieval and disk write speeds than non-premium users. All users on a particular service tier may be assigned equal buffer retrieval and disk write speeds.
FIG. 8 is a block diagram of an exemplary embodiment of a nested data structure in an approval framework for writing received content to a watch folder, thereby limiting the rate of retrieval from an input buffer.
FIG. 9 is a flow diagram that schematically illustrates an embodiment of a method performed by a write thread (lightweight process) writing to the watch folder 17 (e.g., RAM, disk, etc.).
Figs. 8 and 9 collectively illustrate an exemplary method of using nested data structures to determine which data streams to service and the times at which they are to be serviced. Referring to FIG. 8, a file writer thread 81 loads a token 82 into a leaf sequencer 83. The leaf sequencer holds the token from the write thread. In various embodiments, a leaf sequencer may represent a separate device in a subscriber's home, or a video-on-demand asset from a content source. A Weighted Fair Queuing (WFQ) sequencer 84 pulls tokens from the leaf sequencers and applies a weighted write queuing algorithm to the weighted file write queues 51-54. In various embodiments, a WFQ sequencer may be used to divide bandwidth among the various devices within a subscriber's home, or among the video-on-demand assets from a content source. The weighted write queuing algorithm is described in more detail below.
A set of rate limiters 85 controls the input rate into the weighted file write pipe queue 86. In various embodiments, a rate limiter 85 may represent a home or a content provider. The common WFQ sequencer 86, shown as the weighted file write pipe queue, also applies a weighted write queuing algorithm to the data. The WFQ sequencer 86 has internal logic that selects which of the rate limiters 85 to pull a token from. A final rate limiter 87, which may have a different value than the rate limiters 85, takes tokens from the WFQ sequencer 86 and delays them as needed to keep the data rate below its configured limit. An approval loop pulls tokens from the rate limiter 87 and approves them. The rate limiter 87 essentially calculates the time at which a content packet should be transmitted by the CS. When there is no congestion in the inbound content virtual pipe 19, the rate limiter calculates a transmission time in the past, thus approving immediate transmission. When there is network congestion in the virtual pipe, the rate limiter calculates a transmission time in the future, thus delaying transmission approval.
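The scheduling rule of the final rate limiter, where a computed emission time in the past means transmit immediately and a time in the future means delay, can be sketched as follows. The sketch is illustrative only (names are hypothetical, and a real implementation would use wall-clock time):

```python
class RateLimiter:
    """Pace tokens so the long-run byte rate stays below a configured limit.

    schedule() returns the earliest time the token's bytes may be sent:
    a value <= now means immediate approval; a later value delays approval.
    """

    def __init__(self, rate_bytes_per_sec):
        self.rate = rate_bytes_per_sec
        self.next_free = 0.0  # earliest time the link is free again

    def schedule(self, now, token_bytes):
        send_at = max(now, self.next_free)  # a past slot => send immediately
        self.next_free = send_at + token_bytes / self.rate
        return send_at
```

When the pipe is idle (`now` is later than `next_free`), approval is immediate; under load the returned times move into the future, matching the behavior attributed to rate limiter 87.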
It should be noted that the rate limiter, WFQ sequencer, and leaf sequencer can be set to a number of different configurations to model different business needs.
Referring to FIG. 9, a single-thread write process is shown, with multiple threads handled by repeating the illustrated loop. At step 91, the watch folder OS/disk writer 43 generates a list of bytes for a single thread to be written to memory. At step 92, the OS generates an N-byte token 82, where N is equal to the byte block size. At step 93, the thread's token is loaded into one of the leaf sequencers 83 of the approval framework. At step 94, the thread waits for its token to be approved. Once the token is approved, the N bytes are written to memory at step 95. At step 96, the OS determines whether there is payload remaining to be transferred. If not, the method moves to step 97, where the file descriptor is closed. If payload remains, the method loops back to step 92 and the process repeats.
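Steps 91-97 can be summarized as a per-thread loop, sketched below in Python for illustration only. The stub sequencer grants every token immediately, standing in for the approval framework of FIG. 8; all names are hypothetical:

```python
class ImmediateSequencer:
    """Stub leaf sequencer: grants every token at once (no congestion)."""
    def approve(self, nbytes):
        pass  # a real sequencer would block until the token is approved

def write_thread(payload, block_size, sequencer, memory):
    """Single-thread write loop of FIG. 9."""
    offset = 0
    while offset < len(payload):                      # step 96: payload left?
        chunk = payload[offset:offset + block_size]   # step 92: N-byte token
        sequencer.approve(len(chunk))                 # steps 93-94: load, wait
        memory.extend(chunk)                          # step 95: write N bytes
        offset += len(chunk)
    # step 97: close the file descriptor (elided in this sketch)
```

Throttling then consists solely of making `approve` block longer, leaving the write loop itself unchanged.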
In one embodiment, the weighted write queuing algorithm may be expressed by:
x_i = (A_i + Tb_i) / w_i
wherein:
x_i is an intermediate value used to account for queue i in the hypothetical bit-by-bit round-robin model, and represents the estimated completion time of the transmission;
A_i = sum(Qa_ik), wherein Qa_ik is the number of bytes of a previously approved token k from write queue Q_i;
Tb_i is the number of bytes of the current token from write queue Q_i; and
w_i is the weighting factor of write queue Q_i.
Thus, A_i is the sum of the bytes of queue i that have already been transmitted within the "specification window". A_i + Tb_i adds the number of bytes of the candidate token. The weight w_i corresponds to the "speed" of the transmit queue relative to the other weighted streams.
By comparing the x_i values of the various queues, it can be determined which packet would complete its transmission first in the hypothetical bit-by-bit round-robin scheme. That packet is selected for transmission: the token from the write queue Q_i having the minimum value of x_i is approved. In this way, tokens are reconciled by weight.
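Token selection thus reduces to computing x_i for each write queue's candidate token and approving the queue with the smallest value. A minimal Python sketch, for illustration only (names are hypothetical):

```python
def select_queue(queues):
    """Approve the queue whose candidate token would finish first in the
    hypothetical bit-by-bit round-robin model: minimize (A_i + Tb_i) / w_i.

    queues maps a queue name to a tuple of
    (A_i: approved bytes so far, Tb_i: candidate token bytes, w_i: weight).
    """
    def x(name):
        a, tb, w = queues[name]
        return (a + tb) / w
    return min(queues, key=x)
```

With two equally backlogged queues, the heavier-weighted queue is served first; once it has been served often enough, its accumulated A_i raises its x_i and the lighter queue catches up, which is how tokens are reconciled by weight.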
In the drawings and specification, there have been disclosed typical preferred embodiments of the invention and, although specific terms are employed, they are used in a generic and descriptive sense only and not for purposes of limitation.