HK1207719B - A method and an apparatus for virtualization of a quality-of-service
Description
Technical Field
The present disclosure relates to memory management. More particularly, the present disclosure relates to virtualization of quality of service.
Background
Receiving a packet over a network at a node (e.g., a switch) in a communication system requires allocating memory to process the packet. However, memory is a limited resource; accordingly, many techniques have been developed to guarantee quality of service (QoS) levels. QoS is the overall performance of network communications as seen by users of the network and is quantified by measuring various parameters such as error rate, bandwidth, throughput, transmission delay, availability, jitter, and other parameters known to those of ordinary skill in the art.
In order to guarantee the required QoS with limited memory, several methods are commonly used. The tail drop method is a simple queue management technique used to decide when to drop a packet: when the allocated queue reaches a first predetermined occupancy, newly arriving packets are discarded until the queue occupancy decreases to a second predetermined occupancy. This method does not distinguish between packets; all packets are treated equally. A queue is a data structure in which entities, i.e., the data comprising packets, are maintained. The data structure may be implemented over a buffer pool, a buffer being a part of memory that can be allocated to a hardware or software entity. Backpressure refers to a queue management method that requests the packet source to stop packet transmission when an allocated queue reaches a first predetermined occupancy, until the queue occupancy decreases to a second predetermined occupancy. A Random Early Discard (RED) method monitors the average queue size and discards packets based on statistical probability: if the queue is almost empty, all incoming packets are accepted; as the queue grows, the probability of dropping incoming packets increases.
In more sophisticated implementations, several QoS management methods are combined, and the techniques are varied by changing the parameters associated with each technique. The parameters are often selected based on properties characterizing the packet that will use the memory, such as the physical interface on which the packet is received, fields selected from the packet (e.g., differentiated services (DIFFSERV), IEEE 802.1Q VLAN priority), and other characteristics known to those of ordinary skill in the art.
The current trend towards virtualization requires reconsidering QoS management. Virtualization is the process by which a virtual version of a computing resource, such as a hardware or software resource, i.e., central processing units, storage systems, input/output resources, network resources, operating systems, and other resources known in the art, is emulated by a computer system called a host machine. A typical host machine comprises a hardware platform that, optionally together with a software entity (i.e., an operating system), operates a hypervisor, which is software or firmware that creates and operates virtual machines, also referred to as guest machines. Through hardware virtualization, the hypervisor provides a virtual hardware operating platform to each virtual machine. By interfacing with the virtual hardware operating platform, a virtual machine accesses the computing resources of the host machine to carry out its operations. As a result, a single host machine may support multiple virtual machines, each running an operating system and/or other software entities (i.e., applications) simultaneously through virtualization.
Thus, virtualization is likely to increase the pressure on memory resource management due to the increased need for many virtual address spaces in memory. This results in the need to direct the received packets to one of these virtual address spaces, each of which may have a different QoS. The presence of many virtual address spaces, associated QoS levels, and many QoS methods potentially requires a large, complex structure.
Thus, there is a need in the art for a method and apparatus embodying the method that provides a solution to the complexity problems identified above that achieves flexibility and additional advantages.
Disclosure of Invention
In one aspect of the disclosure, an apparatus and method for quality of service according to the appended independent claims are disclosed. Additional aspects are disclosed in the dependent claims.
Drawings
The foregoing aspects described herein will become more readily apparent by reference to the following description when taken in conjunction with the accompanying drawings, wherein:
FIG. 1 depicts a conceptual architecture of a virtualization system according to one aspect of the present disclosure;
FIG. 2a depicts a first part of a flow chart of a process for virtualization of quality of service;
FIG. 2b depicts a second part of the flowchart of a process for virtualization of quality of service; and
FIG. 3 depicts a conceptual structure and information flows among elements of the conceptual structure that enable virtualization of quality of service.
Detailed Description
Unless defined otherwise, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term "and/or" includes any and all combinations of one or more of the associated listed items.
Various disclosed aspects may be illustrated with reference to one or more example configurations. As used herein, the term "exemplary" means "serving as an example, instance, or illustration," and should not necessarily be construed as preferred or advantageous over other configurations disclosed herein.
Unless explicitly stated otherwise, various aspects of the present invention will be described herein with reference to the accompanying drawings, which are schematic illustrations of conceptual configurations of the invention. Various aspects of the disclosure are provided to enable one of ordinary skill in the art to practice the invention. Modifications to the various aspects presented throughout this disclosure will be readily apparent to those of ordinary skill in the art, and the concepts disclosed herein may be extended to other applications.
FIG. 1 depicts a conceptual architecture of a virtualization system 100 according to one aspect of the present disclosure. The hardware platform 102, together with an optional software entity 104 (i.e., an operating system), comprises a host machine that operates a type 2 hypervisor, also referred to as a hosted hypervisor 106. As is well known to those of ordinary skill in the art, the optional software entity 104 is not necessary for a type 1 hypervisor, also known as a native hypervisor. Aspects of the disclosure are equally applicable to both types of hypervisors.
The hardware platform 102 comprises all physical entities embodying the computing resources required by a particular host machine, i.e., central processor units, input/output resources, storage systems, network resources, and other resources known to those of ordinary skill in the art. To avoid undue complexity, only the storage system 108 and network resources 110 are shown. The storage system 108 may include hard drives, semiconductor-based memory, and other types of memory known in the art. The network resources 110 include at least one network interface controller (NIC).
The hypervisor 106 creates and operates at least one virtual machine 112. Although three virtual machines 112 are shown, one skilled in the art will appreciate that any number, including a single virtual machine, may exist. The operation of the virtual machines 112 is configured via parameters defined in a structure 114. In one aspect, the structure 114 may comprise at least one register.
Reference is made to FIG. 2a and FIG. 2b, which depict a flowchart of a process 200 for virtualization of quality of service. To clarify the relationship between the flowchart and the conceptual structure of FIG. 3, whose elements and information flows enable the virtualization of quality of service, references to the structural elements of FIG. 3 are given in parentheses in the description of FIG. 2.
In step 202, a hypervisor (not shown) initializes the structures that configure the operation of the hypervisor and of all subordinate entities, i.e., the interfaces (302), the virtual machines' virtual NICs (VNICs) (not shown), and other entities. Although some of the enumerated entities are not shown in FIG. 3, one of ordinary skill in the art will appreciate that the structure depicted in FIG. 3 is implemented by the virtualization system 100. The process continues in step 204.
In step 204, an incoming packet arrives at an interface (302) (e.g., a NIC, an Ethernet Media Access Control (MAC), or another interface known to those of ordinary skill in the art) and is parsed by a parser (304). In one aspect, the parser is implemented at the interface (302). Information from fields of the parsed packet is evaluated by the parser (304) together with configuration information and any additional information. Based on the evaluation, the packet is associated with an ambience by being assigned an ambience (aura) identifier (306). The term "ambience" identifies how flow control for the packet is recorded and processed. The information fields may include, for example, a source Internet Protocol (IP) address, a source MAC address, a destination IP address, a destination MAC address, a VLAN header, and other fields known to those skilled in the art. The configuration information may, for example, include a mapping from the source port number to the number of buffers that may be allocated to the ambience identified by the ambience identifier (306), an IEEE 802.1Q VLAN priority and/or a DIFFSERV priority, and other configuration information known to those skilled in the art. The additional information may include, for example, the port on which the packet arrived, the interface on which the packet arrived, and other information known to those skilled in the art. The process continues in step 206.
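As an illustration only, the field evaluation may be realized as a configured lookup keyed on a few of the parsed fields. The following C sketch shows one such mapping; the key fields, table shape, and all names (aura_key, aura_map, assign_aura) are assumptions introduced here, not elements of the source.

```c
#include <stdint.h>

/* Hypothetical key built from a few parsed packet fields and arrival
 * metadata; a real parser (304) may evaluate many more fields. */
struct aura_key {
    uint8_t port;       /* physical port the packet arrived on       */
    uint8_t vlan_prio;  /* IEEE 802.1Q priority from the VLAN header */
    uint8_t dscp;       /* DIFFSERV code point                       */
};

#define NUM_PORTS      8
#define NUM_VLAN_PRIOS 8

/* Configured mapping from (port, VLAN priority) to an ambience identifier,
 * filled in when the hypervisor initializes the configuration structures
 * in step 202. */
static uint16_t aura_map[NUM_PORTS][NUM_VLAN_PRIOS];

/* Associate a packet with an ambience by returning its ambience (aura)
 * identifier (306); the dscp field could refine the choice further. */
static uint16_t assign_aura(const struct aura_key *key)
{
    (void)key->dscp;  /* unused in this minimal sketch */
    return aura_map[key->port % NUM_PORTS][key->vlan_prio % NUM_VLAN_PRIOS];
}
```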
In step 206, an ambience identifier (306) is provided to an ambience management block (307). In the ambience management block (307), a resource control block (308) determines resources controlling a quality of service (QoS) for packets associated with the ambience identified by the ambience identifier (306).
Such resources include a pool of available buffers from which buffers are allocated to the ambience identified by the ambience identifier (306). The maximum number of buffers that can be allocated to the ambience identified by the ambience identifier (306) is determined by the configuration parameter AURA_CNT_LIMIT; the number of buffers allocated to the ambience at a particular time is tracked as the ambience buffer level. The term buffer denotes a portion of memory that may be allocated to a hardware or software entity. The term pool is used to identify a plurality of buffers. In one aspect, the buffer pool is implemented and managed as disclosed in a co-pending application No. 14/140,494, filed on 12/25/2013, entitled "A METHOD AND AN APPARATUS FOR MEMORY ADDRESS ALLIGNMENT". A buffer may not belong to more than one pool.
To allocate resources, the ambience identifier (306) is used to determine an ambience pool number (310) and an ambience level (312). Such a determination may be performed by mapping the ambience identifier to an ambience pool number and an ambience level with a structure (e.g., a lookup table). The mapping may map one or more ambience identifiers to a single pool. Where more than one ambience is mapped onto a single pool, the ambience buffer allocations for the different ambience identifiers may be the same or may differ.
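A minimal sketch of such a lookup table is shown below, assuming a fixed number of ambiences; the array names and sizes are illustrative only.

```c
#include <stdint.h>

#define NUM_AURAS 1024

/* Illustrative lookup tables indexed by the ambience identifier (306):
 * aura_pool[] yields the ambience pool number (310), and aura_cnt_limit[]
 * the maximum number of buffers the ambience may hold (AURA_CNT_LIMIT).
 * Several ambience identifiers may map to the same pool number. */
static uint16_t aura_pool[NUM_AURAS];
static uint32_t aura_cnt_limit[NUM_AURAS];

static inline uint16_t aura_to_pool(uint16_t aura_id)
{
    return aura_pool[aura_id % NUM_AURAS];
}
```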
The ambience pool number (310) identifies a pool of available buffers from which buffers are to be allocated to packets associated with the ambience identified by the ambience identifier (306). The ambience level (312) comprises values of configuration parameters. The following notation is used for clarity of disclosure. The configuration parameters of the ambience level related to the pool of available buffers are identified by the symbolic representation AURA_POOLS_LEVELS[]; the configuration parameters of the ambience level related to the buffers allocated to the ambience are identified by the symbolic representation AURA_CNT_LEVELS[]. The expression in the brackets [] indicates the QoS action to be taken at that level.
In one aspect, the expression may comprise DROP, indicating rejection ("dropping") of the packet. Another expression may comprise PASS, indicating acceptance of the packet. Yet another expression may comprise BP, indicating acceptance of the packet and application of backpressure. However, one of ordinary skill in the art will appreciate that other expressions indicating other QoS actions may be used.
Based on the foregoing, by way of example, the notation AURA_POOLS_LEVELS[PASS] indicates a value of the ambience-level configuration parameter related to the pool of available buffers, namely a number of available buffers in the pool that constitutes a threshold for passing packets associated with the ambience, i.e., for performing the QoS action PASS. By way of another example, the notation AURA_CNT_LEVELS[PASS] indicates a value of the ambience-level configuration parameter related to the buffers allocated to the ambience; the value comprises a number of buffers allocated to the ambience that constitutes a threshold for passing packets associated with the ambience, i.e., for performing the QoS action PASS.
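The bracketed notation can be pictured as thresholds indexed by the QoS action. The C sketch below expresses that reading; the enum and field names are assumptions for illustration and are not taken from the source.

```c
#include <stdint.h>

/* QoS actions used as indices into the level arrays, mirroring the bracketed
 * notation AURA_POOLS_LEVELS[] and AURA_CNT_LEVELS[]. */
enum qos_action { PASS = 0, DROP = 1, BP = 2, NUM_ACTIONS };

/* Ambience level (312) for one ambience: thresholds on the number of buffers
 * available in the pool and on the number of buffers already allocated to
 * the ambience. Field names are illustrative only. */
struct aura_levels {
    uint32_t pool_levels[NUM_ACTIONS]; /* AURA_POOLS_LEVELS[PASS/DROP/BP] */
    uint32_t cnt_levels[NUM_ACTIONS];  /* AURA_CNT_LEVELS[PASS/DROP/BP]   */
};
```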
The process continues in step 208.
In step 208, the status of the pool identified by the pool number (310) is determined. In one aspect, the status of the pool is retrieved from memory (314). The status of the pool comprises the buffer level available in the pool (316) and the ambience buffer level allocated to the ambience identified by the ambience identifier (306). In one aspect, the level may comprise an instantaneous level. In another aspect, the instantaneous level may be further processed. Such processing may include, for example, normalization and/or time averaging. Thus, the term "determined level" is used to refer collectively to both the instantaneous level and the processed level, unless explicitly stated otherwise.
The normalization rescales the determined level with respect to the maximum normalized level. In one aspect, normalizing comprises calculating a constant as a ratio of the maximum normalized level to the maximum level of the normalized entity; wherein the normalized entity comprises a buffer level available in the pool or an ambience buffer level assigned to the ambience. The determined value is then multiplied by a constant to produce a normalized value.
In one aspect, the constants are limited to powers of two, which may reduce the rescaled values to an interval as small as <0; 128>. Limiting the constant to a power of two also allows a more computationally efficient calculation, replacing the multiplication with a shift, since multiplication or division by a power of two is equivalent to a bit shift.
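A minimal sketch of this shift-based normalization is given below, assuming a normalized range bounded by 128; the constant name and function name are illustrative.

```c
#include <stdint.h>

#define MAX_NORMALIZED_LEVEL 128u  /* illustrative upper bound of the rescaled range */

/* Rescale a determined level against the maximum level of the normalized
 * entity (e.g., the pool size or AURA_CNT_LIMIT). Restricting the scaling
 * constant to a power of two lets the multiplication collapse into a bit
 * shift; a hardware implementation would precompute the shift per pool or
 * ambience rather than deriving it per sample. */
static uint32_t normalize_level(uint32_t level, uint32_t max_level)
{
    unsigned shift = 0;

    /* choose 1/2^shift so that max_level maps into the normalized range */
    while ((max_level >> shift) > MAX_NORMALIZED_LEVEL)
        shift++;

    return level >> shift;  /* multiplication by 1/2^shift realized as a shift */
}
```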
The time average may be calculated over a programmable time, wherein the programmable time comprises a zero time; thus producing an instantaneous value. Temporal averaging may include, for example, moving averages, averaging over several recent time samples, averaging recent maxima over several time samples, averaging recent minima over several time samples, and other averaging methods known to those of ordinary skill in the art. The process continues in step 210 and step 212, either sequentially or in parallel.
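One possible realization of the programmable averaging, sketched here only as an assumption, is an exponentially weighted moving average whose weight stands in for the programmable averaging time.

```c
#include <stdint.h>

/* Exponentially weighted moving average of the sampled level. The weight w
 * (0..256) plays the role of the programmable averaging time; with w = 256
 * the result degenerates to the instantaneous sample (zero averaging time). */
static uint32_t average_level(uint32_t avg, uint32_t sample, uint32_t w)
{
    return (uint32_t)(((uint64_t)avg * (256u - w) + (uint64_t)sample * w) / 256u);
}
```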
The level is determined by block (320) for each of the ambiences and by a pool management block (318) for the pool according to a schedule. In one aspect, the schedule is periodic. In another aspect, the schedule is triggered by an event, such as a buffer allocation change due to regular processing of packets (e.g., accepting packets and moving them into buffers, moving packets from buffers to the target application, and other processing known to those of ordinary skill in the art). Alternatively, a hardware (344) or software (346) entity may increase or decrease the ambience buffer level allocated to the ambience identified by the ambience identifier (306), as disclosed below.
In step 210, the determined buffer level currently available in the pool (316) is compared in block (322) with the value of a first predetermined threshold. In one aspect, the determined buffer level is normalized. The first threshold is a configurable parameter whose value is determined, for example, as a percentage of the total number of buffers comprising the pool. If the buffer level currently available in the pool (316) crosses the value of the first predetermined threshold, block (322) generates an indicator (324) that is provided to block (332) and instructs that block to generate an interrupt (334), and the process continues in step 214; otherwise, the process continues in step 218.
In step 214, block (332) provides the generated interrupt (334) to the hypervisor. The hypervisor may take action to change the QoS of packets associated with the ambience. By way of example, the action may add resources to or remove resources from the pool depending on the direction in which the first threshold is crossed. In one aspect, when the threshold is crossed from a value at which the buffer level currently available in the pool is less than the first predetermined threshold to a value at which it is greater than the first predetermined threshold, resources may be removed from the pool, and vice versa. The action may be performed via a hardware (344) or software (346) entity as disclosed below. The process continues in step 218.
In step 212, the determined ambience buffer level assigned to the ambience identified by the ambience identifier (306) is compared with a value of a second predetermined threshold in block (322). The second threshold is a configurable parameter whose value is determined, for example, according to the percentage of the number of ambience buffers allocated to the ambience. If the number of ambience buffers allocated to the ambience crosses a value of a second predetermined threshold, the block (322) generates an indicator (324), which is provided to the block (332) and instructs the block to generate an interrupt (334). The process continues in step 214.
In step 214, block (332) provides the generated interrupt (334) to the hypervisor. The hypervisor may take action to change the QoS of packets associated with the ambience. By way of example, the action may add resources to or remove resources from the pool depending on the direction in which the second threshold is crossed. In one aspect, when the threshold is crossed from a value at which the buffer level allocated to the ambience identified by the ambience identifier is less than the second predetermined threshold to a value at which it is greater than the second predetermined threshold, resources may be added to the pool, and vice versa. The action may be performed via a hardware (344) or software (346) entity as disclosed below. The process continues in step 216.
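The crossing detection in steps 210/212 can be pictured as a comparison of two consecutive determined levels against the same threshold. The sketch below is illustrative only; the helper names are assumptions and the interrupt stub merely stands in for indicator (324) and interrupt (334).

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stub standing in for indicator (324) instructing block (332) to raise
 * interrupt (334); a real implementation would signal the hypervisor. */
static void raise_hypervisor_interrupt(uint16_t pool, uint16_t aura_id)
{
    printf("interrupt: pool %u, ambience %u crossed a threshold\n", pool, aura_id);
}

/* Compare the determined level against a predetermined threshold and
 * interrupt the hypervisor whenever the threshold is crossed in either
 * direction, so it can add resources to or remove resources from the pool. */
static void check_threshold(uint32_t prev_level, uint32_t cur_level,
                            uint32_t threshold, uint16_t pool, uint16_t aura_id)
{
    bool was_above = prev_level > threshold;
    bool is_above  = cur_level  > threshold;

    if (was_above != is_above)          /* threshold crossed */
        raise_hypervisor_interrupt(pool, aura_id);
}
```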
In step 216, one or more comparisons are performed between the determined buffer level available in the pool (316) and the values of the determined ambience-level configuration parameters related to the pool of available buffers (i.e., AURA_POOLS_LEVELS[]). Whether the next comparison is performed depends on the result of the previous comparison, as disclosed below. As alluded to previously and disclosed in more detail below, the configurable parameters may be adjusted. Thus, in one aspect, the level may comprise an instantaneous level. In another aspect, the instantaneous level may be further processed. Such processing may include, for example, normalization and/or time averaging. Thus, the term "determined configuration parameter" is used to refer collectively to both the instantaneous level of the configuration parameter and the processed level, unless explicitly indicated otherwise.
If the determined buffer level available in the pool (316) is less than the determined configuration parameter AURA_POOLS_LEVELS[DROP], block (322) generates an indicator (326) that is provided to block (336) and instructs that block to drop the packet. In one aspect, the buffer level available in the pool is normalized and time averaged. In one aspect, block (336) is implemented as part of the interface (302). The process continues in step 220; otherwise, the next comparison is performed.
If the determined buffer level available in the pool (316) is between the determined configuration parameter AURA_POOLS_LEVELS[DROP] and the determined configuration parameter AURA_POOLS_LEVELS[PASS], block (322) generates an indicator (328) that is provided to block (332) and instructs that block to perform random early discard. In one aspect, the buffer level available in the pool is normalized and time averaged. The process continues in step 222; otherwise, the next comparison is performed.
If the determined buffer level available in the pool (316) is less than the determined configuration parameter AURA_POOLS_LEVELS[BP], block (322) generates an indicator (330) that is provided to block (332) and instructs that block to apply backpressure. In one aspect, the buffer level available in the pool is normalized. The process continues in step 224.
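The pool-side comparison chain of step 216 can be summarized as the ordered decision below. This is a sketch only: it follows the order of comparisons given in the text, the function and enum names are assumptions, and the actual threshold ordering is configuration dependent.

```c
#include <stdint.h>

enum qos_decision { QOS_PASS, QOS_DROP, QOS_RED, QOS_BP };

/* Step 216, pool side: compare the determined available buffer level in the
 * pool (316) against the AURA_POOLS_LEVELS[] thresholds in the order given
 * in the text. */
static enum qos_decision pool_decision(uint32_t pool_level,
                                       uint32_t lvl_drop,  /* AURA_POOLS_LEVELS[DROP] */
                                       uint32_t lvl_pass,  /* AURA_POOLS_LEVELS[PASS] */
                                       uint32_t lvl_bp)    /* AURA_POOLS_LEVELS[BP]   */
{
    if (pool_level < lvl_drop)
        return QOS_DROP;   /* step 220: discard the packet     */
    if (pool_level < lvl_pass)
        return QOS_RED;    /* step 222: random early discard   */
    if (pool_level < lvl_bp)
        return QOS_BP;     /* step 224: apply backpressure     */
    return QOS_PASS;       /* accept the packet                */
}
```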
In step 220, the packet is discarded. The process continues in step 204.
In step 222, a random early discard is performed as disclosed in more detail below. The process continues in step 204.
In step 224, backpressure is applied as disclosed in more detail below. The process continues in step 204.
In step 218, one or more comparisons are performed between the determined ambience buffer level allocated to the ambience identified by the ambience identifier (306) and the values of the determined configuration parameters related to the buffers allocated to the ambience (i.e., AURA_CNT_LEVELS[]). Whether the next comparison is performed depends on the result of the previous comparison, as disclosed below. As alluded to previously and disclosed in more detail below, the configurable parameters may be adjusted. Thus, in one aspect, the level may comprise an instantaneous level. In another aspect, the instantaneous level may be further processed. Such processing may include, for example, normalization and/or time averaging. Thus, the term "determined configuration parameter" is used to refer collectively to both the instantaneous level of the configuration parameter and the processed level, unless explicitly indicated otherwise.
If the determined ambience buffer level allocated to the ambience identified by the ambience identifier (306) is greater than the determined configuration parameter AURA_CNT_LIMIT, block (322) generates an indicator (326) that is provided to block (336) and instructs that block to drop the packet. In one aspect, the ambience buffer level allocated to the ambience identified by the ambience identifier is normalized. The process continues in step 220; otherwise, the next comparison is performed. If the determined ambience buffer level allocated to the ambience identified by the ambience identifier (306) is greater than the determined configuration parameter AURA_CNT_LEVELS[DROP], block (322) generates an indicator (326) that is provided to block (336) and instructs that block to drop the packet. In one aspect, the ambience buffer level allocated to the ambience identified by the ambience identifier is normalized. The process continues in step 220; otherwise, the next comparison is performed. If the determined ambience buffer level allocated to the ambience identified by the ambience identifier (306) is between the determined configuration parameter AURA_CNT_LEVELS[PASS] and the determined configuration parameter AURA_CNT_LEVELS[DROP], block (322) generates an indicator (328) that is provided to block (332) and instructs that block to perform random early discard. In one aspect, the ambience buffer level allocated to the ambience identified by the ambience identifier is normalized and time averaged. The process continues in step 222; otherwise, the next comparison is performed.
If the determined ambience buffer level allocated to the ambience identified by the ambience identifier (306) is less than the determined configuration parameter AURA_CNT_LEVELS[BP], block (322) generates an indicator (330) that is provided to block (332) and instructs that block to apply backpressure. In one aspect, the ambience buffer level allocated to the ambience identified by the ambience identifier is normalized and time averaged. The process continues in step 224.
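The ambience-side chain of step 218 mirrors the pool-side chain, but with the opposite orientation: here a higher allocated level is worse. The sketch below follows the comparisons literally as stated in the text; the names are assumptions and the threshold ordering is configuration dependent.

```c
#include <stdint.h>

enum qos_decision { QOS_PASS, QOS_DROP, QOS_RED, QOS_BP };  /* as in the pool-side sketch */

/* Step 218, ambience side: parameter names mirror AURA_CNT_LIMIT and
 * AURA_CNT_LEVELS[]; a higher allocated buffer count is worse. */
static enum qos_decision aura_decision(uint32_t aura_cnt,
                                       uint32_t cnt_limit,  /* AURA_CNT_LIMIT        */
                                       uint32_t lvl_drop,   /* AURA_CNT_LEVELS[DROP] */
                                       uint32_t lvl_pass,   /* AURA_CNT_LEVELS[PASS] */
                                       uint32_t lvl_bp)     /* AURA_CNT_LEVELS[BP]   */
{
    if (aura_cnt > cnt_limit || aura_cnt > lvl_drop)
        return QOS_DROP;   /* step 220: discard the packet     */
    if (aura_cnt > lvl_pass)
        return QOS_RED;    /* step 222: random early discard   */
    if (aura_cnt < lvl_bp)
        return QOS_BP;     /* step 224: apply backpressure     */
    return QOS_PASS;
}
```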
The determined status of the pool identified by the pool number (310) and the determined ambience buffer level allocated to the ambience identified by the ambience identifier (306) are recalculated whenever the buffer level available in the pool (316) and/or the ambience buffer level allocated to the ambience identified by the ambience identifier (306) changes due to a change in buffer usage. Such changes in buffer usage may be due to regular processing of packets, such as accepting packets and moving them into buffers, moving packets from buffers to the target application, and other processing tasks known to those of ordinary skill in the art.
Additionally, a hardware (344) or software (346) entity may increase or decrease the ambience buffer level allocated to the ambience identified by the ambience identifier (306). This allows the QoS to be adjusted. Such hardware entities may include co-processors that perform additional processing, such as compression engines (not shown) that perform packet compression, the parser (304), and other hardware entities known to those of ordinary skill in the art. The software entities may include, for example, software processes that allocate or reclaim buffers or multicast packets, and other software entities known to those of ordinary skill in the art.
In one aspect, a co-processor using the same buffer pool may allocate or de-allocate buffers for the ambience identified by the ambience identifier (306), together with a corresponding increase or decrease of the ambience buffer level allocated to that ambience. This allows the QoS to be adjusted.
In another aspect, since the ambience count need not correspond to a physical resource (i.e., a buffer), the co-processor may increase or decrease the ambience buffer level allocated to the ambience identified by the ambience identifier (306) without allocating or de-allocating buffers, thus adjusting the QoS.
In yet another aspect, the parser (304) may increase or decrease the ambience buffer level allocated to the ambience identified by the ambience identifier (306). This allows the QoS to be adjusted without consuming buffers.
In yet another aspect, a software entity may increase or decrease the normalized buffer level allocated to the ambience identified by the ambience identifier (306). This allows the QoS to be adjusted without consuming buffers.
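The key observation above is that the ambience count is only a number compared against AURA_CNT_LEVELS[], so biasing it tightens or relaxes the QoS without touching any physical buffer. The helper below is a hypothetical sketch of such an adjustment, not an API from the source.

```c
#include <stdint.h>

#define NUM_AURAS 1024

/* Per-ambience allocated-buffer count, i.e., the level compared against
 * AURA_CNT_LEVELS[]. A co-processor or software entity may bias it up or
 * down purely to tighten or relax the QoS for that ambience. */
static uint32_t aura_cnt[NUM_AURAS];

static void aura_cnt_adjust(uint16_t aura_id, int32_t delta)
{
    uint32_t cur = aura_cnt[aura_id % NUM_AURAS];

    if (delta < 0 && (uint32_t)(-delta) > cur)
        aura_cnt[aura_id % NUM_AURAS] = 0;        /* clamp at zero */
    else
        aura_cnt[aura_id % NUM_AURAS] = cur + (uint32_t)delta;
}
```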
As disclosed previously, in step 222, a random early discard is performed. In one aspect, in block (332), a comparator (337) compares the pre-computed drop probability to a pseudo-random number to determine whether to drop the packet or to allocate a buffer for further processing of the packet.
For the drop probability according to the determined buffer level available in the pool (316), block (332) calculates a first drop probability as a function of the determined configuration parameter AURA_POOLS_LEVELS[PASS] and the determined configuration parameter AURA_POOLS_LEVELS[DROP]. In one aspect, the function is linear, going from 0 when the value of the normalized and time-averaged buffer level available in the pool (316) is greater than or equal to the value of the determined configuration parameter AURA_POOLS_LEVELS[PASS] to 1 when the determined buffer level available in the pool (316) is less than or equal to the value of the determined configuration parameter AURA_POOLS_LEVELS[DROP]. However, any function defined over the interval <0; 1> is contemplated.
For the drop probability according to the determined ambience buffer level allocated to the ambience identified by the ambience identifier (306), block (332) calculates a second drop probability as a function of the determined configuration parameter AURA_CNT_LEVELS[PASS] and the determined configuration parameter AURA_CNT_LEVELS[DROP]. In one aspect, the function is linear, going from 0 when the determined ambience buffer level allocated to the ambience identified by the ambience identifier (306) is less than or equal to the value of the determined configuration parameter AURA_CNT_LEVELS[PASS] to 1 when the determined ambience buffer level allocated to the ambience identified by the ambience identifier (306) is greater than or equal to the value of the determined configuration parameter AURA_CNT_LEVELS[DROP]. However, any function defined over the interval <0; 1> is contemplated.
After calculating the first drop probability and the second drop probability, an entity (e.g., block (322)) in the ambience management block (307) combines the first drop probability and the second drop probability by taking the greater of the two. Block (322) then generates a pseudo-random number drawn from, or scaled to, the interval <0; 1>, and in one aspect the packet is discarded when the pseudo-random number is less than the value of the combined drop probability. In another aspect, the packet is discarded when the pseudo-random number is less than or equal to the value of the combined drop probability.
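A fixed-point sketch of the linear probability functions and the combined RED decision follows, with the aspect where the linear function is used; probabilities are scaled to 0..256, rand() stands in for the hardware pseudo-random source, and all names are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Pool side: drop probability rises linearly as the available buffer level
 * falls from AURA_POOLS_LEVELS[PASS] down to AURA_POOLS_LEVELS[DROP]. */
static uint32_t pool_drop_prob(uint32_t level, uint32_t pass, uint32_t drop)
{
    if (level >= pass) return 0;      /* at or above PASS: never drop  */
    if (level <= drop) return 256;    /* at or below DROP: always drop */
    return 256u * (pass - level) / (pass - drop);
}

/* Ambience side: drop probability rises linearly as the allocated buffer
 * level climbs from AURA_CNT_LEVELS[PASS] up to AURA_CNT_LEVELS[DROP]. */
static uint32_t aura_drop_prob(uint32_t level, uint32_t pass, uint32_t drop)
{
    if (level <= pass) return 0;
    if (level >= drop) return 256;
    return 256u * (level - pass) / (drop - pass);
}

/* Combine the two probabilities by taking the greater, then compare with a
 * pseudo-random number scaled to the same 0..256 range. */
static bool red_drop(uint32_t pool_prob, uint32_t aura_prob)
{
    uint32_t combined = pool_prob > aura_prob ? pool_prob : aura_prob;
    uint32_t rnd = (uint32_t)(rand() % 257);  /* stand-in for the hardware PRNG */

    return rnd < combined;                    /* one aspect: drop when rnd < p */
}
```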
The different slopes of the linear functions reflect the fact that a higher determined buffer level available in the pool (316) is better, whereas for the determined ambience buffer level allocated to the ambience identified by the ambience identifier (306) lower values are better.
As previously disclosed, in step 224, backpressure is applied. Under certain circumstances, a pool may be assigned to ambiences corresponding to packets arriving at different interfaces. It is then necessary to apply backpressure at all of these interfaces. To enable an ambience to affect many interfaces, a two-level mapping process is used. Referring back to FIG. 3, a first-level mapping is performed in block (338). Upon receiving the indicator (330) that backpressure is to be applied, block (338) maps the ambience identifier (306) of the ambience requesting backpressure onto a backpressure identifier (340). One or more ambience identifiers (306) may be mapped to a single backpressure identifier (340). A second-level mapping is performed in block (342). Block (342) comprises a structure that maps the backpressure identifier (340) to one or more channels of the interface (302). Thus, when any ambience whose ambience identifier (306) is mapped onto the backpressure identifier (340) is in a backpressure state, all interface (302) channels mapped onto that backpressure identifier (340) are in a backpressure state.
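The two-level mapping can be sketched as two small tables, one per mapping level. Table sizes, the channel bitmap width, and all names below are assumptions made for illustration.

```c
#include <stdint.h>

#define NUM_AURAS 1024
#define NUM_BPIDS 64

/* First-level mapping (block 338): ambience identifier -> backpressure
 * identifier; several ambiences may share one backpressure identifier. */
static uint8_t aura_to_bpid[NUM_AURAS];

/* Second-level mapping (block 342): backpressure identifier -> bitmap of
 * interface (302) channels that must be put into a backpressure state. */
static uint64_t bpid_to_channels[NUM_BPIDS];

/* Return the set of channels to backpressure when the ambience identified by
 * aura_id requests backpressure (indicator 330). */
static uint64_t backpressure_channels(uint16_t aura_id)
{
    uint8_t bpid = aura_to_bpid[aura_id % NUM_AURAS];
    return bpid_to_channels[bpid % NUM_BPIDS];
}
```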
Various aspects of the disclosure are provided to enable one of ordinary skill in the art to practice the invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the concepts disclosed herein may be applied to other aspects without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
All structural and functional equivalents to the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Such exemplary logic blocks, modules, circuits, and algorithm steps may be implemented as electronic hardware, computer software, or combinations of both.
Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed in accordance with the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is explicitly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "step for".
Claims (24)
1. A method for virtualization of quality of service, comprising:
associating a packet with an atmosphere via an atmosphere identifier by evaluating information of an internal field of a structure of the packet received at an interface and information external to the structure of the packet;
determining configuration parameters for the ambience, including a parameter AURA_CNT_LIMIT identifying a maximum number of buffers that can be allocated to the ambience, a parameter AURA_CNT_LEVELS identifying an ambience level related to buffers allocated to the ambience, and a parameter AURA_POOLS_LEVELS identifying an ambience level related to a pool of available buffers;
determining a pool of buffers for the ambience;
determining a status of a resource of the pool, the resource comprising a buffer level available in the pool and a buffer level allocated to the ambience; and
determining a quality of service for the packet according to the determined state of the resources of the pool and the configuration parameters for the ambience.
2. The method of claim 1, wherein the determining a quality of service for the packet comprises:
comparing the determined buffer level available in the pool to a first threshold;
comparing the determined buffer level allocated to the ambience with a second threshold; and
providing an interrupt when the determined buffer level available in the pool crosses the first threshold and/or when the determined buffer level allocated to the ambience crosses the second threshold.
3. The method of claim 2, further comprising:
adding resources to or removing resources from the pool in accordance with the provided interrupt and the direction of crossing the first threshold and/or the second threshold; wherein
The resource is removed from the pool when the first threshold is crossed from a value less than the first threshold to a level greater than the first threshold and/or when the second threshold is crossed from a value less than the second threshold to a level greater than the second threshold; and is
Otherwise the resource is added.
4. The method of claim 1, wherein the determining a quality of service for the packet comprises:
comparing the determined buffer level available in the pool with values of configuration parameters of an ambience level related to the pool of buffers;
comparing the determined ambience buffer level assigned to the ambience with a value of a configuration parameter of the ambience level related to the buffers assigned to the ambience; and
determining the quality of service for the packet based on a result of the comparison.
5. The method of claim 4, wherein determining the quality of service for the packet according to the result of the comparison comprises:
discarding the packet when:
the determined buffer level available in the pool is less than a value of a determined configuration parameter AURA_POOLS_LEVELS[DROP] indicating a threshold of the parameter AURA_POOLS_LEVELS for dropping packets, or
the determined buffer level allocated to the ambience is greater than the determined value of the configuration parameter AURA_CNT_LIMIT, or
the determined buffer level allocated to the ambience is greater than a value of a determined configuration parameter AURA_CNT_LEVELS[DROP], the configuration parameter AURA_CNT_LEVELS[DROP] indicating a threshold of the parameter AURA_CNT_LEVELS for dropping packets.
6. The method of claim 4, wherein determining the quality of service for the packet according to the result of the comparison comprises:
performing random early discard when:
the determined buffer level available in the pool is between a value of a determined configuration parameter AURA_POOLS_LEVELS[DROP] indicating a threshold of the parameter AURA_POOLS_LEVELS for dropping packets and a value of a determined configuration parameter AURA_POOLS_LEVELS[PASS] indicating a threshold of the parameter AURA_POOLS_LEVELS for passing packets, or
the determined buffer level allocated to the ambience is between a value of a determined configuration parameter AURA_CNT_LEVELS[DROP] indicating a threshold of the parameter AURA_CNT_LEVELS for dropping packets and a value of a determined configuration parameter AURA_CNT_LEVELS[PASS] indicating a threshold of the parameter AURA_CNT_LEVELS for delivering packets.
7. The method of claim 6, wherein the performing random early dropping of the packet comprises:
calculating a first drop probability as a first function of the determined value of the configuration parameter AURA_POOLS_LEVELS[PASS] and the determined value of the configuration parameter AURA_POOLS_LEVELS[DROP];
calculating a second drop probability as a second function of the determined value of the configuration parameter AURA_CNT_LEVELS[PASS] and the determined value of the configuration parameter AURA_CNT_LEVELS[DROP];
combining the calculated first drop probability and the second drop probability;
generating a pseudo random number; and
performing a random early discard based on a result of the comparison of the combined discard probability and the pseudorandom number.
8. The method of claim 7, wherein the combining the calculated first drop probability and the second drop probability comprises:
taking the greater of the first drop probability and the second drop probability.
9. The method of claim 7, wherein the performing random early dropping of the packet comprises:
discarding the packet when the pseudo random number is less than or equal to the combined discard probability.
10. The method of claim 7, wherein the performing random early dropping of the packet comprises:
discarding the packet when the pseudo-random number is less than the combined discard probability.
11. The method of claim 4, wherein said determining the quality of service for the packet according to the result of the comparison comprises:
applying backpressure when:
the determined buffer level available in the pool is less than a value of a determined configuration parameter AURA_POOLS_LEVELS[BP] indicating a threshold of the parameter AURA_POOLS_LEVELS for applying backpressure, or
the determined buffer level allocated to the ambience is less than a value of a determined configuration parameter AURA_CNT_LEVELS[BP], the configuration parameter AURA_CNT_LEVELS[BP] indicating a threshold of the parameter AURA_CNT_LEVELS for applying backpressure.
12. The method of claim 11, wherein the applying back pressure comprises:
mapping the ambience identifiers of all the ambiences requesting backpressure to a backpressure indicator;
mapping the backpressure indicator onto one or more channels of the interface; and
applying the backpressure in accordance with the backpressure indicator.
13. An apparatus for virtualization of quality of service, comprising:
a parser configured to associate a packet received at an interface with an atmosphere via an atmosphere identifier by evaluating information of an internal field of a structure of the packet and information outside of the structure of the packet;
an ambience management entity communicatively connected to the parser, the ambience management entity being configured to determine configuration parameters for the ambience, including a parameter AURA_CNT_LIMIT identifying a maximum number of buffers that can be allocated to the ambience, a parameter AURA_CNT_LEVELS identifying an ambience level related to buffers allocated to the ambience, and a parameter AURA_POOLS_LEVELS identifying an ambience level related to a pool of available buffers,
determining a pool for the ambience,
determining a status of resources of the pool, the resources comprising buffer levels available in the pool and buffer levels allocated to the ambience, and
determining a quality of service for the packet according to the determined state of the resources of the pool and the configuration parameters for the ambience.
14. The apparatus of claim 13, wherein the ambience management entity determines the quality of service for the packet by being configured to:
comparing the determined buffer level available in the pool to a first threshold;
comparing the determined buffer level allocated to the ambience with a second threshold; and
providing an interrupt when the determined buffer level available in the pool crosses the first threshold and/or when the determined buffer level allocated to the ambience crosses the second threshold.
15. The apparatus of claim 14, further comprising:
means for adding or removing resources from the pool in accordance with the interrupt provided and the direction of crossing the first threshold and/or the second threshold; wherein
The resource is removed from the pool when the first threshold is crossed from a value less than the first threshold to a level greater than the first threshold and/or when the second threshold is crossed from a value less than the second threshold to a level greater than the second threshold; and is
Otherwise the resource is added.
16. The apparatus of claim 13, wherein the ambience management entity determines the quality of service for the packet by being configured to:
comparing the determined buffer level available in the pool with the determined value of a configuration parameter of an ambience level related to the buffer pool;
comparing the determined ambience buffer level assigned to the ambience with a value of a configuration parameter of the ambience level related to the buffers assigned to the ambience; and
determining the quality of service for the packet based on a result of the comparison.
17. The apparatus of claim 16, wherein the ambience management entity determines the quality of service for the packet by being configured to discard the packet when:
the determined buffer level available in the pool is less than a value of a determined configuration parameter AURA_POOLS_LEVELS[DROP] indicating a threshold of the parameter AURA_POOLS_LEVELS for dropping packets, or
the determined buffer level allocated to the ambience is greater than the determined value of the configuration parameter AURA_CNT_LIMIT, or
the determined buffer level allocated to the ambience is greater than a value of a determined configuration parameter AURA_CNT_LEVELS[DROP], the configuration parameter AURA_CNT_LEVELS[DROP] indicating a threshold of the parameter AURA_CNT_LEVELS for dropping packets.
18. The apparatus of claim 16, wherein the ambience management entity determines the quality of service for the packet by being configured to perform a random early discard of the packet when:
the determined buffer level available in the pool is between a value of a determined configuration parameter AURA_POOLS_LEVELS[DROP] indicating a threshold of the parameter AURA_POOLS_LEVELS for dropping packets and a value of a determined configuration parameter AURA_POOLS_LEVELS[PASS] indicating a threshold of the parameter AURA_POOLS_LEVELS for passing packets, or
the determined buffer level allocated to the ambience is between a value of a determined configuration parameter AURA_CNT_LEVELS[DROP] indicating a threshold of the parameter AURA_CNT_LEVELS for dropping packets and a value of a determined configuration parameter AURA_CNT_LEVELS[PASS] indicating a threshold of the parameter AURA_CNT_LEVELS for delivering packets.
19. The apparatus of claim 18, wherein the ambience management entity performs the random early dropping of the packets by being configured to:
calculating a first drop probability as a first function of the determined value of the configuration parameter AURA_POOLS_LEVELS[PASS] and the determined value of the configuration parameter AURA_POOLS_LEVELS[DROP];
calculating a second drop probability as a second function of the determined value of the configuration parameter AURA_CNT_LEVELS[PASS] and the determined value of the configuration parameter AURA_CNT_LEVELS[DROP];
combining the calculated first drop probability and the second drop probability;
generating a pseudo random number; and
performing a random early discard based on a result of the comparison of the combined discard probability and the pseudorandom number.
20. The apparatus of claim 19, wherein the ambience managing entity combines the calculated first and second drop probabilities by being configured to take the greater of the first and second drop probabilities.
21. The apparatus of claim 19, wherein the ambience management entity performs random early dropping of the packet by being configured to drop the packet when the pseudo-random number is less than or equal to the drop probability combined.
22. The apparatus of claim 19, wherein the ambience management entity performs random early dropping of the packet by being configured to drop the packet when the pseudo-random number is less than the drop probability combined.
23. The apparatus of claim 16, wherein the ambience management entity determines the quality of service for the packet by being configured to apply backpressure when:
the determined buffer level available in the pool is less than a value of a determined configuration parameter AURA_POOLS_LEVELS[BP] indicating a threshold of the parameter AURA_POOLS_LEVELS for applying backpressure, or
the determined buffer level allocated to the ambience is less than a value of a determined configuration parameter AURA_CNT_LEVELS[BP], the configuration parameter AURA_CNT_LEVELS[BP] indicating a threshold of the parameter AURA_CNT_LEVELS for applying backpressure.
24. The apparatus of claim 23, wherein the ambience management entity applies backpressure by being configured to:
mapping the ambience identifiers of all the ambiences requesting backpressure to a backpressure indicator;
mapping the backpressure indicator onto one or more channels of the interface; and
applying the backpressure in accordance with the backpressure indicator.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/140,503 | 2013-12-25 | | |
| US14/140,503 (US9379992B2) | 2013-12-25 | 2013-12-25 | Method and an apparatus for virtualization of a quality-of-service |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1207719A1 | 2016-02-05 |
| HK1207719B | 2018-04-20 |