
CN113709810A - Method, device and medium for configuring network service quality - Google Patents

Method, device and medium for configuring network service quality

Info

Publication number
CN113709810A
CN113709810A (application CN202111003765.7A)
Authority
CN
China
Prior art keywords
qos
crd
pod
binding
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111003765.7A
Other languages
Chinese (zh)
Other versions
CN113709810B (en)
Inventor
吴正东
李阳
王天青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Xinghuan Zhongzhi Information Technology Co ltd
Original Assignee
Henan Xinghuan Zhongzhi Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Xinghuan Zhongzhi Information Technology Co ltd filed Critical Henan Xinghuan Zhongzhi Information Technology Co ltd
Priority to CN202111003765.7A priority Critical patent/CN113709810B/en
Publication of CN113709810A publication Critical patent/CN113709810A/en
Application granted granted Critical
Publication of CN113709810B publication Critical patent/CN113709810B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/0268 Traffic management, e.g. flow control or congestion control, using specific QoS parameters for wireless networks, e.g. QoS class identifier [QCI] or guaranteed bit rate [GBR]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/0226 Traffic management, e.g. flow control or congestion control, based on location or mobility
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/10 Flow control between communication endpoints
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/16 Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W28/18 Negotiating wireless communication parameters
    • H04W28/20 Negotiating bandwidth
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W84/00 Network topologies
    • H04W84/02 Hierarchically pre-organised networks, e.g. paging networks, cellular networks, WLAN [Wireless Local Area Network] or WLL [Wireless Local Loop]
    • H04W84/04 Large scale networks; Deep hierarchical networks
    • H04W84/08 Trunked mobile radio systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiments of the invention disclose a method, a device, and a medium for configuring QoS. The method, executed by the master node, comprises the following steps: acquiring user-plane QoS configuration information corresponding to at least one Pod in a container platform; generating a control-plane QoS binding custom resource (CRD) according to the QoS rule, and generating an application group CRD corresponding to the QoS binding CRD according to each item of Pod positioning information; and storing the QoS binding CRD and the application group CRD in the master node, thereby instructing the working node on which the configured Pod runs to issue a flow control rule to the matched physical networking device when it detects the storage of the QoS binding CRD and the application group CRD.

Description

Method, device and medium for configuring network service quality
Technical Field
The embodiments of the invention relate to the field of container-group network flow control, and in particular to a method, a device, and a medium for configuring network quality of service.
Background
Under the current wave of cloud adoption, applications and services of all kinds are gradually being migrated to the cloud, and the container cloud is an increasingly widely used underlying cloud technology; among container clouds, those based on Kubernetes have become the de facto standard. Different applications or services running in the container cloud have different requirements on network quality of service (QoS), which requires that the Kubernetes cluster be able to provide QoS guarantees on demand, ensuring predictable network service quality in terms of packet loss, delay, jitter, bandwidth, and so on.
The QoS bandwidth guarantee function in traditional networks is mature, but the Kubernetes community currently implements only the maximum-bandwidth-limit part of QoS. The specific implementation is as follows: bandwidth metadata that limits the maximum ingress and egress bandwidth is added to the annotations of the container group (Pod) object.
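For reference, the community mechanism described above attaches the limit via Pod annotations; a minimal sketch follows (the annotation keys `kubernetes.io/ingress-bandwidth` and `kubernetes.io/egress-bandwidth` are the ones the community bandwidth CNI meta-plugin reads; the Pod name and image are invented for the example):

```python
# Sketch: the community bandwidth limit is attached as Pod annotations.
# The bandwidth CNI meta-plugin reads these keys and caps traffic; only a
# maximum can be expressed, no guaranteed minimum.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "web-0",
        "annotations": {
            "kubernetes.io/ingress-bandwidth": "10M",
            "kubernetes.io/egress-bandwidth": "10M",
        },
    },
    "spec": {"containers": [{"name": "web", "image": "nginx"}]},
}

def max_bandwidth(pod: dict, direction: str) -> str:
    """Return the annotated maximum bandwidth for 'ingress' or 'egress'."""
    return pod["metadata"]["annotations"][f"kubernetes.io/{direction}-bandwidth"]
```

Note that changing these annotations requires recreating the Pod, which is exactly the restart problem identified below.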
In the process of implementing the invention, the inventors found that the prior art has the following defects. The Kubernetes community's container-network QoS bandwidth function can only limit the maximum bandwidth in the Pod's ingress and egress directions; it cannot guarantee a minimum bandwidth and cannot achieve finer-grained QoS control. In addition, because the current QoS configuration is placed in the annotations and then attached to the Pod object, the Pod must be restarted for a modified configuration to take effect, which may make the service provided inside the Pod temporarily unavailable; nor can the Pod objects that need QoS configuration be selected flexibly. Meanwhile, an IFB (Intermediate Functional Block) network device must be created for the Veth device pair of every Pod on which QoS is set, so a large number of IFB devices have to be maintained, resulting in a heavy device-maintenance workload.
Disclosure of Invention
The embodiments of the invention provide a method, a device, and a medium for configuring network quality of service, which creatively implement multi-dimensional QoS bandwidth guarantees for a Kubernetes cluster container network, and can therefore provide reliable, guaranteed network communication services in scenarios where different Pods have different demands on network bandwidth resources.
In a first aspect, an embodiment of the present invention provides a QoS configuration method, executed by a master node in a container platform, including:
obtaining user-plane QoS configuration information corresponding to at least one Pod in the container platform, wherein the user-plane QoS configuration information comprises a QoS rule and at least one item of Pod positioning information;
generating a QoS binding custom resource (CRD, Custom Resource Definition) of the control plane according to the QoS rule, and generating an application group CRD corresponding to the QoS binding CRD according to each item of Pod positioning information;
and storing the QoS binding CRD and the application group CRD in the master node, to instruct the working node on which the configured Pod runs to issue a flow control rule for the configured Pod to the matched physical networking device when it detects the storage of the QoS binding CRD and the application group CRD.
In a second aspect, an embodiment of the present invention further provides a method for configuring network quality of service (QoS), executed by a working node in a container platform, including:
monitoring CRD storage on the master node in the container platform;
when detecting that a new QoS binding CRD and application group CRD have been stored in the master node, checking whether the Pod configured by the monitored CRDs is located on the local node;
if so, obtaining the CNI type of the CNI (Container Network Interface) plug-in adapted by the local node, and obtaining, according to the CNI type, the physical networking device used by the CNI plug-in for cross-node network communication;
and generating a TC (Traffic Control) policy and an IPset set according to the monitored QoS binding CRD and application group CRD, and issuing them to the physical networking device, so as to achieve bandwidth guarantees for the configured Pod.
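As a rough, hypothetical illustration of this working-node step (the device name, set name, class id, and rates below are invented for the example; the real rules depend on the CNI's networking device and the watched CRDs), the TC policy and IPset set can be pictured as rendered command strings:

```python
def build_flow_control_cmds(dev, set_name, pod_ips, classid, min_rate, max_rate):
    """Sketch: render tc HTB + ipset commands for one guaranteed-bandwidth class.

    All names and rates are illustrative, not the patented implementation.
    """
    cmds = [
        # Root HTB qdisc on the physical networking device.
        f"tc qdisc add dev {dev} root handle 1: htb default 30",
        # HTB class: 'rate' is the guaranteed minimum, 'ceil' the maximum.
        f"tc class add dev {dev} parent 1: classid {classid} "
        f"htb rate {min_rate} ceil {max_rate}",
        # IP set holding the configured Pods' addresses.
        f"ipset create {set_name} hash:ip",
    ]
    cmds += [f"ipset add {set_name} {ip}" for ip in pod_ips]
    # Steer matching traffic into the class via an ipset-based ematch filter.
    cmds.append(
        f"tc filter add dev {dev} parent 1: protocol ip "
        f"basic match 'ipset({set_name} src)' flowid {classid}"
    )
    return cmds

cmds = build_flow_control_cmds("eth0", "qos-podset", ["10.244.1.5"],
                               "1:10", "5mbit", "20mbit")
```

The key point visible here is that HTB expresses both a floor (`rate`) and a ceiling (`ceil`), which is what enables the minimum-bandwidth guarantee that the annotation-based community approach lacks.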
In a third aspect, an embodiment of the present invention further provides a master node in a container platform, including a processor and a memory, where the memory is configured to store instructions that, when executed, cause the processor to perform the following operations:
obtaining user-plane QoS configuration information corresponding to at least one Pod in the container platform, wherein the user-plane QoS configuration information comprises a QoS rule and at least one item of Pod positioning information;
generating a QoS binding CRD of the control plane according to the QoS rule, and generating an application group CRD corresponding to the QoS binding CRD according to each item of Pod positioning information;
and storing the QoS binding CRD and the application group CRD in the master node, to instruct the working node on which the configured Pod runs to issue the matched flow control rule to the matched physical networking device when it detects the storage of the QoS binding CRD and the application group CRD.
In a fourth aspect, an embodiment of the present invention further provides a working node in a container platform, including a processor and a memory, where the memory is configured to store instructions that, when executed, cause the processor to perform the following operations:
monitoring CRD storage on the master node in the container platform;
when detecting that a new QoS binding CRD and application group CRD have been stored in the master node, checking whether the configured Pod corresponding to the monitored CRDs is located on the local node;
if so, obtaining the CNI type of the CNI plug-in adapted by the local node, and obtaining, according to the CNI type, the physical networking device used by the CNI plug-in for cross-node network communication;
and generating a TC policy and an IPset set according to the monitored QoS binding CRD and application group CRD, and issuing them to the physical networking device, so as to achieve bandwidth guarantees for the configured Pod.
In a fifth aspect, an embodiment of the present invention further provides a storage medium, where the storage medium is configured to store instructions, where the instructions are configured to execute the method for configuring network QoS according to any embodiment of the present invention.
In the technical solutions of the embodiments of the invention, after the master node in the container platform obtains the user-plane QoS configuration information corresponding to at least one Pod, it generates a control-plane QoS binding CRD according to the QoS rule, generates an application group CRD corresponding to the QoS binding CRD according to the Pod positioning information, and stores both in the master node. When a working node in the container platform detects that the master node has stored a new QoS binding CRD and application group CRD targeting the local node, it obtains the CNI type of the CNI plug-in adapted by the local node, then obtains the physical networking device used by the CNI plug-in for cross-node network communication, generates a flow control (TC) policy and an IPset set according to the monitored QoS binding CRD and application group CRD, and issues them to the physical networking device to achieve bandwidth guarantees for the configured Pod. The technical solutions of the embodiments of the invention creatively implement multi-dimensional QoS bandwidth guarantees for the Kubernetes cluster container network, and can therefore provide reliable, guaranteed network communication services in scenarios where different Pods have different demands on network bandwidth resources.
Drawings
Fig. 1a is a diagram of a QoS guarantee implementation architecture of a Kubernetes cluster container platform to which an embodiment of the present invention is applicable;
Fig. 1b is a flowchart of a QoS configuration method performed by a master node in a container platform according to an embodiment of the present invention;
Fig. 2 is a flowchart of a QoS configuration method performed by a working node in a container platform according to a second embodiment of the present invention;
FIG. 3 is a diagram illustrating an application scenario in which a container platform provides QoS guarantees for a specific Pod's access direction, according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an application scenario in which a container platform provides end-to-end QoS guarantees, according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating an application scenario in which a container platform provides QoS guarantees when at least one specified IP address accesses one Pod or a group of Pods;
Fig. 6 is a block diagram of a QoS configuration apparatus configured in a master node in a container platform according to a third embodiment of the present invention;
Fig. 7 is a block diagram of a QoS configuration apparatus configured in a working node in a container platform according to a fourth embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a computer device according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. A process may be terminated when its operations are completed, but may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
First, the relevant terms used in the embodiments of the present invention are briefly explained:
Kubernetes: an open-source system for managing containerized applications on a cluster of container hosts. It provides mechanisms for deploying, updating, and maintaining container applications, and aims to make deploying containerized applications simple and efficient.
Pod: also referred to as a container group, the smallest and simplest basic unit that Kubernetes deploys and manages; one Pod represents one process running on the cluster.
Service: a logical grouping of Pods together with a policy for accessing them; traffic that a user sends to the Service is load-balanced across the corresponding Pods.
CRD: Custom Resource Definition, a user-defined resource; an extension of the Kubernetes API that lets users easily store and retrieve structured data.
TC: Traffic Control, the traffic rate-limiting, shaping, and policy control mechanism provided by the Linux kernel.
QoS: Quality of Service, a network flow control mechanism that guarantees the reliability of network traffic transmission.
User plane: the protocol stack that carries user information.
Control plane: the protocol stack that carries signaling information.
For convenience of explanation, the implementation principle of the embodiments of the present invention is first briefly described with reference to the QoS guarantee implementation architecture diagram of a Kubernetes cluster container platform shown in Fig. 1a.
The implementation objectives of the embodiments of the present invention are as follows: to provide a method for achieving bandwidth guarantees (maximum and minimum bandwidth) in the QoS of a container cloud platform network, in which Pods or Services are selected through a flexible resource selector, and container-cloud-network QoS bandwidth guarantees are achieved across multiple dimensions, including QoS bandwidth guarantees for a single Pod or Service, Pod to Pod, Service to Service, and one or more specific IP (Internet Protocol) addresses to a Pod or to a Service.
Both the maximum bandwidth and the minimum bandwidth can be set, truly realizing QoS bandwidth guarantees for the Kubernetes cluster container network, while also improving on the Kubernetes community's current implementation of the maximum-bandwidth function.
In summary, the objectives that the invention is intended to achieve mainly include the following aspects:
1. Implement the resource selector through flexible resource identifiers, i.e., select the Pods or Services that require configuration; the flexible resource identifiers include, but are not limited to, Namespace, Label, Pod Name, and custom Annotation fields.
2. Achieve QoS bandwidth guarantees for a specific Pod's access direction by means of the resource selector.
3. Achieve end-to-end QoS bandwidth guarantees, i.e., from one group of Pods to another group of Pods, by means of the resource selector.
4. Achieve QoS bandwidth guarantees when one or more IP addresses access a Pod, by means of the resource selector together with one or more IP address segments.
5. Apply Kubernetes cluster QoS configuration without restarting the Pod.
6. Reduce the number of IFB devices on each working node, eliminating the need to create one IFB device per configured Pod, thereby reducing cost and increasing efficiency.
As shown in Fig. 1a, a Kubernetes cluster container platform to which embodiments of the present invention are applicable includes a Master Node and a plurality of Worker Nodes.
A plurality of Pods are configured on each working node. On the master node, a human-machine interface (Vector UI) is configured, through which a user can input user-plane QoS configuration information for one or more Pods to be configured in the container platform and send it to a Vector Controller. The Vector Controller generates a control-plane QoS binding CRD (QosBinding CRD) from the user-plane QoS configuration information, and stores the application group CRD (AppliedToGroup CRD) corresponding to the QosBinding CRD in the master node's database (typically an etcd database).
Meanwhile, through its own Vector Agent and by calling the service interface (kube-apiserver) on the master node, each working node monitors in real time the QosBinding CRDs and AppliedToGroup CRDs that are newly stored in the master node's database and point to one or more Pods to be configured. When a working node detects that a Pod configured by a currently stored QosBinding CRD and AppliedToGroup CRD is running on the node itself, it acquires, according to the CNI type (typically flannel, antrea, or calico) of the CNI (Container Network Interface) plug-in adapted by the node as determined by the Vector Agent, the physical networking device used when that CNI implements cross-node network communication, such as a bridge device or a physical network-card device. A matched TC policy can then be generated according to the Pod's QosBinding CRD and AppliedToGroup CRD and issued to that physical networking device, achieving bandwidth guarantees for each configured Pod.
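By way of illustration only, the Vector Agent's CNI-type-to-device lookup might resemble the following (the device names are typical defaults for each CNI, e.g. flannel's VXLAN tunnel interface; they are assumptions for the sketch and depend on the CNI backend and configuration, not something mandated by this scheme):

```python
# Illustrative only: typical cross-node devices per CNI type.
CNI_CROSS_NODE_DEVICE = {
    "flannel": "flannel.1",   # flannel VXLAN backend's tunnel interface
    "antrea":  "antrea-gw0",  # Antrea's gateway port on the OVS bridge
    "calico":  "eth0",        # Calico typically routes via the physical NIC
}

def cross_node_device(cni_type: str) -> str:
    """Return the device the TC rules would be attached to (sketch)."""
    try:
        return CNI_CROSS_NODE_DEVICE[cni_type]
    except KeyError:
        raise ValueError(f"unsupported CNI type: {cni_type}")
```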
Example one
Fig. 1b is a flowchart of a QoS configuration method according to an embodiment of the present invention. This embodiment is applicable to scenarios in which different Pods have different demands on network bandwidth resources, providing reliable, guaranteed network communication services. The method may be executed by a QoS configuration apparatus, which may be implemented in software and/or hardware and is generally integrated in the master node of a container platform. As shown in Fig. 1b, the method specifically includes:
step 110, obtaining QoS configuration information of a user plane corresponding to at least one Pod in the container platform, where the QoS configuration information of the user plane includes: a QoS rule, and at least one item of Pod location information.
Referring to Fig. 1a, the user-plane QoS configuration information is configuration information input by the user through the human-machine interface provided on the master node.
In this embodiment, the QoS rule may include: a QoS bandwidth guarantee type, a maximum bandwidth, a minimum bandwidth, and a priority.
The QoS bandwidth guarantee types include: a bandwidth limitation on a single Pod, an end-to-end bandwidth limitation between one group of Pods and another group of Pods within the container platform, and a bandwidth limitation when at least one specified IP address accesses one Pod or a group of Pods; the bandwidth limitations include maximum and minimum bandwidth limits in the access direction.
In this embodiment, the bandwidth limitation on a single Pod is used to achieve a QoS bandwidth guarantee for a specific Pod's access direction; the end-to-end bandwidth limitation between one group of Pods (one Service) and another group of Pods (another Service) in the container platform is used to achieve end-to-end QoS bandwidth guarantees within the Kubernetes cluster; the bandwidth limitation when at least one specified IP address accesses one Pod is used to achieve a QoS bandwidth guarantee from one or more specific IPs to a specific Pod; and the bandwidth limitation when at least one specified IP address accesses a group of Pods is used to achieve a QoS bandwidth guarantee from one or more specific IPs to a specific Service.
Of course, it is understood that the QoS bandwidth guarantee types may further include bandwidth limitation on a group of Services, bandwidth limitation between one Pod and another Pod in the container platform, and so on; the embodiments of the present invention do not limit this.
Unlike the prior art, in which the container platform can only implement a maximum-bandwidth-limit function, the technical solutions of the embodiments of the invention allow the maximum bandwidth and the minimum bandwidth to be set simultaneously.
Specifically, the maximum bandwidth is used to set the maximum bandwidth of the traffic, the minimum bandwidth is used to set the minimum bandwidth guarantee of the traffic, and the priority is used to set the priority of the traffic.
Further, the QoS rule may also include Leaf QoS rules, a traffic direction, and the like, which this embodiment does not limit. The Leaf QoS rules are used for setting end-to-end QoS bandwidth guarantees, and the traffic direction is used for setting whether the bandwidth guarantee applies to the ingress direction or the egress direction.
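A minimal sketch of how the QoS rule described above might be modeled (all field names and values are assumptions for illustration; the patent does not fix a concrete schema):

```python
from dataclasses import dataclass, field

@dataclass
class QosRule:
    """Sketch of the user-plane QoS rule fields described in the text."""
    guarantee_type: str            # e.g. "pod", "pod-to-pod", "ip-to-pod"
    max_bandwidth: str             # ceiling for the traffic, e.g. "100mbit"
    min_bandwidth: str             # guaranteed floor, e.g. "10mbit"
    priority: int                  # scheduling priority of the traffic
    direction: str = "ingress"     # "ingress" or "egress"
    leaf_rules: list = field(default_factory=list)  # end-to-end (Leaf) rules

rule = QosRule("pod", "100mbit", "10mbit", priority=1)
```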
In this embodiment, the Pod positioning information is used to locate or identify the Pods to which the user-plane QoS configuration information should be applied.
The Pod positioning information may include at least one of: the Pod namespace, the Pod tag, the Pod name, and at least one custom field in the Pod annotation.
Specifically, the Pod namespace is used to match Pods under one or more namespaces; the Pod tag is used to match one or more Pod labels; the Pod name is used to match Pods of the corresponding name, and may be written in a regular-expression-like form; and at least one custom field in the Pod annotation is used to match custom fields in the Pod's Annotation.
Step 120, generating a QoS binding CRD (QosBinding CRD) of the control plane according to the QoS rule, and generating an application group CRD (AppliedToGroup CRD) corresponding to the QoS binding CRD according to each item of Pod positioning information.
In this embodiment, step 120 may be performed by the Vector Controller in the master node shown in Fig. 1a. The Vector Controller, as the core component of the container-network QoS function, runs on a node of the Kubernetes cluster in the form of a Kubernetes Workload, for example a Deployment.
The Vector Controller lists/watches the QosPolicy CRD and system resources such as Pods, and uniformly converts the user-plane QoS configuration information into the control-plane QosBinding CRD.
Further, at least one of the Pod namespace, Pod label, Pod name, and custom fields in the Pod annotation is looked up according to the Pod positioning information in the user-plane QoS configuration information, and the matched Pod information can be stored uniformly in the application group CRD.
It should be emphasized again that the solutions of the embodiments of the present invention implement the resource selector through flexible resource identification. The resource selector can identify the Pods or Services for which QoS guarantees need to be set by matching at least one of the Pod namespace, Pod tag, Pod name, and custom fields in the Pod annotation. Implementing this function involves the following steps:
When the resource selector in the Vector Controller starts, the Pod cache is synchronized through the Kubernetes Informer mechanism.
When a user adds a new QoS configuration through the user-plane QoS configuration information, the QoS configuration information in the control-plane application group CRD is passed in by calling the resource selection module, where the application group CRD contains information such as the Pod namespace, Pod label, Pod name, and at least one custom field in the Pod annotation of the QoS configuration object.
The resource selection module first matches the Pod namespace field and filters the cached Pod resources; if no Pod namespace is configured, matching and filtering are performed against the default namespace.
The resource selection module then matches the Pod Label field to filter the cached Pod resources; the Pod Label field may be a label corresponding to a Service configuration or a custom label.
The resource selection module may also directly match Pod names, filtering for specific Pod names under a specific Pod namespace.
The resource selection module can furthermore flexibly match custom fields in the Pod annotations, performing Pod matching and filtering by parsing the annotations.
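The filtering sequence above can be sketched as successive passes over the cached Pod objects (a simplification with assumed field names; the real selection module also supports regular-expression-style name matching):

```python
from typing import Optional

def select_pods(pods, namespace: Optional[str] = None,
                labels: Optional[dict] = None, name: Optional[str] = None,
                annotations: Optional[dict] = None):
    """Sketch of the resource selector: filter the Pod cache step by step."""
    # 1. Namespace filter (fall back to 'default' when unset).
    ns = namespace or "default"
    out = [p for p in pods if p["namespace"] == ns]
    # 2. Label filter: every requested label must match.
    if labels:
        out = [p for p in out
               if all(p["labels"].get(k) == v for k, v in labels.items())]
    # 3. Exact-name filter within the namespace.
    if name:
        out = [p for p in out if p["name"] == name]
    # 4. Annotation custom-field filter.
    if annotations:
        out = [p for p in out
               if all(p.get("annotations", {}).get(k) == v
                      for k, v in annotations.items())]
    return out

cache = [
    {"namespace": "default", "name": "web-0", "labels": {"app": "web"}},
    {"namespace": "prod", "name": "db-0", "labels": {"app": "db"}},
]
```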
Step 130, storing the QoS binding CRD and the application group CRD in the master node to indicate that the working node running the configured Pod issues the flow control rule for the configured Pod to the matched physical networking device when monitoring the storage of the QoS binding CRD and the application group CRD.
As described above, after storing the QoS binding CRD and the application group CRD in the master node, the working node monitors the update, and when monitoring the storage of the QoS binding CRD and the application group CRD, the working node running the configured Pod can issue the flow control rule for the configured Pod to the matched physical networking device.
In the technical solutions of the embodiments of the invention, after the master node in the container platform obtains the user-plane QoS configuration information corresponding to at least one Pod, it generates a control-plane QoS binding CRD according to the QoS rule, generates an application group CRD corresponding to the QoS binding CRD according to the Pod positioning information, and stores both in the master node. When a working node in the container platform detects that the master node has stored a new QoS binding CRD and application group CRD targeting the local node, it obtains the CNI type of the CNI plug-in adapted by the local node, then obtains the physical networking device used by the CNI plug-in for cross-node network communication, generates a flow control (TC) policy and an IPset set according to the monitored QoS binding CRD and application group CRD, and issues them to the physical networking device to achieve bandwidth guarantees for the configured Pod. The technical solutions of the embodiments of the invention creatively implement multi-dimensional QoS bandwidth guarantees for the Kubernetes cluster container network, and can therefore provide reliable, guaranteed network communication services in scenarios where different Pods have different demands on network bandwidth resources.
On the basis of the foregoing embodiments, when the QoS bandwidth guarantee type in the QoS rule is a bandwidth limitation on a single Pod, generating the control-plane QoS binding custom resource (CRD) according to the QoS rule and generating the application group CRD corresponding to the QoS binding CRD according to each item of Pod positioning information may specifically include:
configuring a class identifier for the HTB (Hierarchical Token Bucket) class matched with the QoS rule, and allocating a matched Handle identifier to each filter of the HTB class;
after parsing each item of information included in the QoS rule, generating the control-plane QoS binding CRD according to the parsing result, the class identifier, and each Handle identifier;
and screening the pre-stored cache information of each Pod in the container platform according to the at least one item of Pod positioning information to obtain the identification information of the single Pod, and generating the application group CRD corresponding to the QoS binding CRD according to the identification information of the single Pod.
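The class-identifier and Handle allocation steps above can be pictured as a simple per-qdisc counter (a sketch only; the starting minor number of 10 and the u32-style filter-handle format are assumptions, not the patented numbering scheme):

```python
class HandleAllocator:
    """Sketch: hand out HTB class ids (major:minor) and filter handles."""

    def __init__(self, major: int = 1):
        self.major = major
        self._next_minor = 10     # leave low minors for default classes (assumption)
        self._next_handle = 1

    def alloc_classid(self) -> str:
        """Allocate the next HTB class identifier, e.g. '1:10'."""
        classid = f"{self.major}:{self._next_minor}"
        self._next_minor += 1
        return classid

    def alloc_filter_handle(self) -> str:
        """Allocate a u32-style filter handle, e.g. '800::1' (illustrative)."""
        handle = f"800::{self._next_handle:x}"
        self._next_handle += 1
        return handle

alloc = HandleAllocator()
```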
On the basis of the foregoing embodiments, when the QoS bandwidth guarantee type in the QoS rule is an end-to-end bandwidth limitation between one group of Pods and another group of Pods in the container platform, generating the control-plane QosBinding CRD according to the QoS rule, and generating the AppliedToGroup CRD corresponding to the QosBinding CRD according to each item of Pod locating information, may specifically include:
configuring a class identifier for the HTB class matched with the QoS rule, and allocating a matching Handle identifier to each filter of the HTB class;
configuring a bottom-layer Leaf QoS rule according to the two groups of Pods specified in the QoS bandwidth guarantee type;
parsing each item of information included in the QoS rule, and generating the control-plane QosBinding CRD according to the parsing result, the class identifier, each Handle identifier, and the Leaf QoS rule;
screening the pre-stored cache information of each Pod in the container platform according to the at least one item of Pod locating information to obtain identification information of the two groups of Pods;
and generating the AppliedToGroup CRD corresponding to the QosBinding CRD according to the Leaf QoS rule and the identification information of the two groups of Pods.
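The end-to-end variant differs mainly in the bottom-layer Leaf QoS rule derived from the two Pod groups. A minimal sketch, again with hypothetical field and selector names:

```python
def build_end_to_end_binding(qos_rule, pod_cache):
    """Two Pod groups -> a QosBinding carrying a bottom-layer Leaf QoS rule,
    plus an AppliedToGroup holding both groups' identities (assumed schema)."""
    def match(selector):
        return [p for p in pod_cache
                if all(p["labels"].get(k) == v for k, v in selector.items())]

    src = match(qos_rule["sourceSelector"])
    dst = match(qos_rule["destSelector"])
    leaf_rule = {  # end-to-end minimum bandwidth between the two groups
        "rate": qos_rule["minBandwidth"],
        "srcIPs": [p["ip"] for p in src],
        "dstIPs": [p["ip"] for p in dst],
    }
    qos_binding = {"kind": "QosBinding",
                   "spec": {"leafQos": leaf_rule,
                            "priority": qos_rule["priority"]}}
    applied_to_group = {"kind": "AppliedToGroup",
                        "spec": {"srcPods": [p["name"] for p in src],
                                 "dstPods": [p["name"] for p in dst]}}
    return qos_binding, applied_to_group
```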
On the basis of the foregoing embodiments, when the QoS bandwidth guarantee type in the QoS rule is a bandwidth limitation imposed when at least one IP address accesses one Pod or a group of Pods, generating the control-plane QosBinding CRD according to the QoS rule, and generating the AppliedToGroup CRD corresponding to the QosBinding CRD according to each item of Pod locating information, may specifically include:
configuring a class identifier for the HTB class matched with the QoS rule, and allocating a matching Handle identifier to each filter of the HTB class;
configuring a Leaf QoS rule according to the one Pod or group of Pods specified in the QoS bandwidth guarantee type;
parsing each item of information included in the QoS rule, and generating the control-plane QosBinding CRD according to the parsing result, the class identifier, each Handle identifier, and the Leaf QoS rule;
screening the pre-stored cache information of each Pod in the container platform according to the at least one item of Pod locating information to obtain identification information of the Pod or group of Pods, and acquiring the at least one specified IP address included in the QoS rule;
and generating the AppliedToGroup CRD corresponding to the QosBinding CRD according to the Leaf QoS rule, the identification information of the Pod or group of Pods, and the at least one specified IP address.
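For the specified-IP case, the main difference is that the configured addresses travel in an IpBlocks-style field of the AppliedToGroup. A hedged sketch with assumed field names:

```python
def build_ip_access_binding(qos_rule, pod_cache):
    """At least one specified IP accessing one Pod or a Pod group -> a Leaf
    QoS rule plus an AppliedToGroup whose ipBlocks field carries the IPs
    (field names are illustrative assumptions)."""
    selector = qos_rule["podSelector"]
    pods = [p for p in pod_cache
            if all(p["labels"].get(k) == v for k, v in selector.items())]
    leaf_rule = {"rate": qos_rule["minBandwidth"]}
    applied_to_group = {
        "kind": "AppliedToGroup",
        "spec": {"podRefs": [p["name"] for p in pods],
                 # the specified IP addresses / ranges from the QoS rule
                 "ipBlocks": list(qos_rule["ipAddresses"])},
    }
    qos_binding = {"kind": "QosBinding", "spec": {"leafQos": leaf_rule}}
    return qos_binding, applied_to_group
```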
Embodiment Two
Fig. 2 is a flowchart of a QoS configuration method according to a second embodiment of the present invention. This embodiment is applicable to scenarios in which different Pods place different demands on network bandwidth resources and a reliable, guaranteed network communication service must be provided. The method may be executed by a QoS configuration apparatus, which may be implemented in software and/or hardware and is generally integrated in a working node in a container platform. As shown in fig. 2, the method specifically includes:
S210, performing CRD storage monitoring on the master node in the container platform.
As shown in fig. 1a, the method of this embodiment of the present invention is mainly executed by the Vector Agent in a working node. The Vector Agent runs as a Kubernetes workload, for example a DaemonSet, on each working node of the Kubernetes cluster, and acts as the configuration agent for the QoS bandwidth guarantee functions.
The Vector Agent provides CNI type detection, detection of the networking device used for cross-node communication, monitoring of the control-plane QosBinding CRD and AppliedToGroup CRD resources, and the issuing of QoS rules and IPset sets through the QosDriver module. The specific functions are as follows:
When the Vector Agent starts, it loads the CNI Detector module, detects the type of the CNI (Container Network Interface) plug-in currently in use, and acquires the bridge device or physical network card device that the CNI plug-in uses to implement cross-node network communication.
In addition, after starting, the Vector Agent monitors updates to the control-plane QosBinding CRD and AppliedToGroup CRD resources stored in the master node. When a QosBinding CRD resource is added or deleted in the master node, this indicates the creation or deletion of a QosPolicy resource, and the Vector Agent calls the QosDriver to issue TC rules to, or delete them from, the acquired bridge device or physical network card device.
When an AppliedToGroup CRD resource is updated, the Pod information associated with the QosPolicy has changed, and the QosDriver is called to update the corresponding IPset set.
The Vector Agent calls the QosDriver module when issuing TC rules. The functions of the QosDriver include management of HTB (Hierarchical Token Bucket) Qdisc queues in the ingress direction, initialization and default configuration of HTB Class rules, QoS IPset set management, and management of TC Basic-type Filters.
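The watch-and-dispatch behaviour described above can be sketched as a small event handler; the event shape and the QosDriver method names here are assumptions for illustration, not the actual interface:

```python
def handle_event(event, driver):
    """Dispatch a CRD watch event to the QosDriver (hypothetical interface).

    QosBinding added/deleted  -> QosPolicy created/deleted -> issue/delete TC rules
    AppliedToGroup updated    -> member Pods / IpBlocks changed -> update IPset
    """
    kind, action, obj = event["kind"], event["action"], event["object"]
    if kind == "QosBinding" and action == "added":
        driver.issue_tc_rules(obj)
    elif kind == "QosBinding" and action == "deleted":
        driver.delete_tc_rules(obj)
    elif kind == "AppliedToGroup" and action == "updated":
        driver.update_ipset(obj)
```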
S220, when monitoring that a new QosBinding CRD and AppliedToGroup CRD are stored in the master node, detecting whether the Pod configured by the monitored CRD is located on the local node.
S230, if so, acquiring the CNI type of the CNI plug-in adapted to the local node, and acquiring, according to the CNI type, the physical networking device used by the CNI plug-in to implement cross-node network communication.
Optionally, the obtaining a CNI type of a CNI plug-in adapted to a local node, and obtaining, according to the CNI type, a physical networking device used when the CNI plug-in implements cross-node network communication may include:
reading the CNI configuration file mounted in a preset storage directory to acquire the CNI type; if the CNI type is detected to be an Overlay network, acquiring the bridge device matched with the local node; and if the CNI type is detected to be an Underlay network, acquiring the physical network card device matched with the local node.
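The mapping from detected network type to target device might look like the following sketch. The configuration keys (networkType, bridge, device) are assumed placeholders, since each CNI plug-in stores this information in its own ConfigMap format:

```python
def cni_devices(cni_conf):
    """Map the detected CNI network type to the device(s) that TC rules target.

    Overlay  -> bridge device; Underlay -> physical NIC; a hybrid / direct-routing
    setup may yield both. Keys and defaults are illustrative assumptions.
    """
    net_type = cni_conf.get("networkType", "overlay")
    devices = []
    if net_type in ("overlay", "hybrid"):
        devices.append(("bridge", cni_conf.get("bridge", "cni0")))
    if net_type in ("underlay", "hybrid"):
        devices.append(("nic", cni_conf.get("device", "eth0")))
    return devices
```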
Correspondingly, on the basis of the foregoing embodiments, after acquiring, according to the CNI type, the physical networking device used by the CNI plug-in to implement cross-node network communication, the method may further include:
detecting whether an ingress-direction IFB device exists on the bridge device or the physical network card device;
if not, creating an IFB device corresponding to the bridge device or the physical network card device, and redirecting the traffic of the classless Qdisc queue in the ingress direction to the egress direction of the created IFB device;
and creating an HTB-type Qdisc queue as the classful Qdisc queue of the bridge device or the physical network card device in the egress direction.
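The IFB and HTB setup in the steps above corresponds roughly to the following sequence of ip/tc commands, generated here as strings for illustration; the device names and handle numbers are arbitrary examples, not values mandated by the scheme:

```python
def tc_setup_commands(dev, ifb="ifb0"):
    """Commands that mirror ingress traffic of `dev` into an IFB device and
    attach a classful HTB qdisc in the egress direction (a sketch)."""
    return [
        f"ip link add {ifb} type ifb",
        f"ip link set {ifb} up",
        # classless ingress qdisc on the bridge/NIC ...
        f"tc qdisc add dev {dev} handle ffff: ingress",
        # ... whose traffic is redirected to the IFB device's egress direction
        (f"tc filter add dev {dev} parent ffff: protocol all u32 match u32 0 0 "
         f"action mirred egress redirect dev {ifb}"),
        # classful HTB qdiscs in the egress direction
        f"tc qdisc add dev {dev} root handle 1: htb default 30",
        f"tc qdisc add dev {ifb} root handle 1: htb default 30",
    ]
```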
It should be emphasized again that, with the above arrangement, when the Vector Agent in a working node issues TC rules, it is no longer necessary to create a one-to-one IFB device for each Pod. The Vector Agent detects the CNI type, then detects the bridge device or physical network card in use, and issues the QoS TC configuration rules on that bridge device or physical network card, rather than issuing the configuration one by one to the per-Pod veth devices.
Correspondingly, when the Vector Agent starts, it reads the CNI configuration file mounted under the /etc/cni/net.d directory and detects the type of the CNI plug-in currently in use.
At the same time, the Vector Agent reads the ConfigMap configuration corresponding to each CNI plug-in, and reads from it the network type used by the current CNI: when the network type is an Overlay network, the corresponding bridge device is acquired; when the network type is an Underlay network, the corresponding physical network card device is acquired.
Further, if direct routing or a Hybrid configuration is enabled in the network, the bridge device and the physical network card device may be acquired simultaneously.
In addition, the Vector Agent detects whether an ingress-direction IFB device exists on the bridge device or the physical network card device; if not, it creates the IFB device and redirects the traffic of the ingress-direction classless Qdisc queue to the egress direction of the IFB device. It also detects whether an egress-direction classful Qdisc queue exists on the bridge device or the physical network card device, and if not, creates an HTB-type Qdisc queue.
That is, each working node only creates as many IFB devices as there are bridge devices or physical network card devices.
S240, generating a traffic control (TC) policy and an IPset set according to the monitored QosBinding CRD and AppliedToGroup CRD, and issuing them to the physical networking device, so as to guarantee bandwidth for the configured Pod.
It should be emphasized again that, by applying the technical solutions of the embodiments of the present invention, the Pod does not need to be restarted when the QoS configuration information is updated. The QoS configuration information is stored in CRDs and is no longer attached to the Pod object as an annotation, so the QoS configuration information is decoupled from the Pod object.
Correspondingly, in this embodiment, as shown in fig. 1a, non-invasive development may be adopted, and the QoS bandwidth guarantee function may be implemented as a custom Kubernetes Operator plug-in. When the container platform is deployed, the QosPolicy CRD, QosBinding CRD, and AppliedToGroup CRD are created to store user QoS rule data and control-plane QoS rule data. When a QoS bandwidth guarantee configuration is added, the related CRD configuration can be updated directly; when one is deleted, only the related CRD configuration needs to be deleted.
Meanwhile, the Vector Agent serves as the implementation agent of the QoS bandwidth guarantee and can directly issue TC configuration information matched with the QoS rule to the related bridge or physical network card. The Pod does not need to be restarted; that is, the corresponding TC rules are no longer set through the CNI plug-in as part of the Pod's network configuration flow at creation time.
Application scenario one
Fig. 3 is a diagram of an application scenario in which the container platform provides QoS guarantees in the access direction of a specific Pod. As shown in fig. 3, Pods belonging to different services have different requirements on network bandwidth resources. Because the network implementations of different Pods on a host share the send and receive queues when transmitting and receiving traffic, Pod communications may contend for bandwidth; as a result, even though a maximum bandwidth is set for a specific Pod, that Pod's network transmission quality of service may not be guaranteed, so a QoS minimum bandwidth needs to be set for the Pod. The method for implementing this function involves the following steps:
The Vector Controller in the master node monitors the QosPolicy CRD and the Pods. When a new QoS configuration is added, it preprocesses the configuration, for example by allocating a Class Id (used for identification and for the HTB Class configuration) to each QoS rule and a Handle Id to each Filter, and stores the preprocessed data in the QosBinding CRD. It also creates the associated AppliedToGroup CRD resource to store the associated Pod information or IpBlocks information. In addition, when a QoS bandwidth guarantee configuration is deleted, the preprocessed control-plane resources such as the QosBinding CRD and AppliedToGroup CRD are deleted.
The Vector Agent in a working node monitors the QosBinding CRD and AppliedToGroup CRD stored in the master node. The Vector Agent sets a TC HTB Qdisc queue on the bridge device or physical network card to cache data packets for fine-grained traffic control; it sets the Root TC HTB Class to the total bandwidth of the worker node, and sets a Default TC HTB Class as the default minimum bandwidth for traffic with no QoS rule configured. When a QosBinding CRD resource exists, it creates a TC HTB Class, configures ceil (the maximum bandwidth), rate (the minimum bandwidth), and priority (the priority for fine-grained control of the newly added traffic), and mounts the Class under the Root TC HTB Class; it then updates the value of the Default TC HTB Class (the total bandwidth of the Root TC HTB Class minus the rate values of all mounted Classes), creates and fills the associated IPset set, creates a Basic-type Filter, and associates the Filter with the IPset set. When an AppliedToGroup CRD resource is updated, indicating that the associated Pods or IpBlocks have changed, the IPset set associated with the QoS rule is updated. The TC Qdisc controls the network send/receive rate by queueing data packets, a TC Class represents a control policy, and a TC Filter classifies data packets into a specific control policy. Bandwidth sharing and exclusive modes are realized through a multi-level QoS tree, thereby implementing the QoS bandwidth guarantee.
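The Default TC HTB Class arithmetic used above (the root class's total bandwidth minus the rates of all classes mounted under it) is simple enough to state directly:

```python
def default_class_rate(total_rate, class_rates):
    """Rate of the Default TC HTB Class: the Root class's total bandwidth
    minus the sum of the rate values of all Classes mounted under it.
    Rates are plain numbers here (e.g. Mbit/s) for illustration."""
    remaining = total_rate - sum(class_rates)
    if remaining < 0:
        raise ValueError("mounted class rates exceed the root bandwidth")
    return remaining
```

The same subtraction applies one level down for the Default Leaf TC HTB Class under a given TC HTB Class.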
Application scenario two
Fig. 4 is a diagram of an application scenario in which the container platform provides end-to-end QoS guarantees, to which an embodiment of the present invention is applied. As shown in fig. 4, bandwidth guarantees in the access direction of a single container cannot meet end-to-end service communication requirements. There may be scenarios in which multiple Pods (services) simultaneously access one or more Pods (services), and when the traffic priorities of those Pods differ, the high-priority Pods' access to the one or more Pods may need to be guaranteed. The method for implementing this function involves the following steps:
The Vector Controller in the master node monitors the QosPolicy CRD and the Pods. When a new end-to-end QoS configuration is added, it preprocesses the configuration, for example by configuring a Leaf QoS rule when converting the configuration into the control-plane QosBinding CRD resource, and creates the associated AppliedToGroup CRD resource for the Leaf QoS rule. Likewise, when an end-to-end QoS guarantee configuration is deleted, the associated Leaf QoS rule and AppliedToGroup CRD resource are deleted.
When the Vector Agent in a working node monitors the QosBinding CRD and AppliedToGroup CRD resources and determines that the newly added QosBinding CRD resource contains a Leaf QoS rule, it configures the Leaf TC HTB Class and the corresponding Leaf TC HTB Class rule, and mounts the Leaf TC HTB Class under the TC HTB Class. The rate of the Leaf TC HTB Class rule is the minimum bandwidth value configured between the Pods. At the same time, the rate value of the Default Leaf TC HTB Class is created or updated (the rate value of the TC HTB Class minus the rate values of all Leaf TC HTB Classes mounted under that Class). The Vector Agent also creates a corresponding IPset set for the Leaf QoS rule and sets a Basic Filter to associate the IPset set with the Leaf TC HTB Class.
Application scenario three
Fig. 5 is a diagram of an application scenario in which the container platform provides QoS guarantees when at least one specified IP address accesses one Pod or a group of Pods. As shown in fig. 5, when a client outside the Kubernetes cluster needs to access a Pod, or an internal Pod needs to access a service outside the Kubernetes cluster, guaranteeing QoS bandwidth is equally important in both cases, and a specific IP or multiple IPs must be configured and identified to achieve fine-grained control of the QoS bandwidth guarantee. The method for implementing this function involves the following steps:
First, the QosPolicy CRD configuration supports specifying one IP address, multiple IP addresses, or an IP address range.
The Vector Controller in the master node monitors the QosPolicy CRD and the Pods. When new QoS configuration information specifying one or more IPs accessing Pods is added, the configuration information is preprocessed and converted into the control-plane QosBinding CRD resource, and the one or more configured IPs are stored in the IpBlocks field of the AppliedToGroup CRD associated with the Leaf QoS rule.
When the Vector Agent in a working node monitors the QosBinding CRD and AppliedToGroup CRD resources, after configuring the TC HTB Class it checks whether the IpBlocks field in the associated AppliedToGroup CRD resource is empty. If not, it creates a Leaf TC HTB Class rule and mounts the Leaf TC HTB Class under the TC HTB Class. The rate value of the Default Leaf TC HTB Class is created or updated (the rate value of the TC HTB Class minus the rate values of all Leaf TC HTB Classes mounted under that Class). A corresponding IPset set is created, whose IP Hash content is the value of the IpBlocks field in the associated AppliedToGroup CRD resource, and a Basic Filter is set to associate the IPset set with the Leaf TC HTB Class.
From the description of the application scenarios above, it can be seen that the technical solutions of the embodiments of the present invention flexibly match, based on a resource-selector mechanism, the Pod or Service resources for which QoS guarantee configurations need to be set; store the QoS configuration information in custom CRD resources; issue the TC QoS configuration through a custom Kubernetes Operator without intruding into the Kubernetes code; realize bandwidth sharing and exclusive modes through a multi-level QoS tree; and implement the QoS bandwidth guarantee function of a Kubernetes cluster container network in multiple dimensions, thereby remedying the shortcoming that the existing Kubernetes community implements only a maximum-bandwidth-limit function.
Embodiment Three
Fig. 6 is a schematic structural diagram of a QoS configuration apparatus according to a third embodiment of the present invention. The apparatus is configured in a master node in a container platform, and in conjunction with fig. 6, the apparatus comprises: a user plane configuration information obtaining module 610, a control plane configuration information generating module 620 and a storing module 630, wherein:
a user plane configuration information obtaining module 610, configured to obtain QoS configuration information of a user plane corresponding to at least one Pod in the container platform, where the QoS configuration information includes: a QoS rule and at least one item of Pod positioning information;
a control plane configuration information generating module 620, configured to generate a control-plane QosBinding CRD according to the QoS rule, and generate an AppliedToGroup CRD corresponding to the QosBinding CRD according to each item of Pod locating information;
the storage module 630 is configured to store the QoS binding CRD and the application to the group CRD in the master node, so as to indicate that the working node running the configured Pod issues a flow control rule for the configured Pod to the matched physical networking device when monitoring the storage of the QoS binding CRD and the application to the group CRD.
On the basis of the foregoing embodiments, the QoS rule may include: QoS bandwidth guarantee type, maximum bandwidth, minimum bandwidth and priority;
the QoS bandwidth guarantee types may include: bandwidth limitations on a single Pod, end-to-end bandwidth limitations between one group of pods and another group of pods within a container platform, and bandwidth limitations when at least one specified internet protocol, IP, address accesses one or a group of pods; bandwidth limits include maximum and minimum bandwidth limits in the direction of access;
the Pod location information may include at least one of: at least one custom field in the Pod namespace, Pod tag, Pod name, and Pod annotation.
On the basis of the foregoing embodiments, when the QoS bandwidth guarantee type in the QoS rule is a bandwidth limitation on a single Pod, the control plane configuration information generating module 620 may be specifically configured to:
configure a class identifier for the HTB class matched with the QoS rule, and allocate a matching Handle identifier to each filter of the HTB class;
parse each item of information included in the QoS rule, and generate the control-plane QosBinding CRD according to the parsing result, the class identifier, and each Handle identifier;
and screen the pre-stored cache information of each Pod in the container platform according to the at least one item of Pod locating information to obtain identification information of the single Pod, and generate the AppliedToGroup CRD corresponding to the QosBinding CRD according to that identification information.
On the basis of the foregoing embodiments, when the QoS bandwidth guarantee type in the QoS rule is an end-to-end bandwidth limitation between one group of Pods and another group of Pods in the container platform, the control plane configuration information generating module 620 may be specifically configured to:
configure a class identifier for the HTB class matched with the QoS rule, and allocate a matching Handle identifier to each filter of the HTB class;
configure a bottom-layer Leaf QoS rule according to the two groups of Pods specified in the QoS bandwidth guarantee type;
parse each item of information included in the QoS rule, and generate the control-plane QosBinding CRD according to the parsing result, the class identifier, each Handle identifier, and the Leaf QoS rule;
screen the pre-stored cache information of each Pod in the container platform according to the at least one item of Pod locating information to obtain identification information of the two groups of Pods;
and generate the AppliedToGroup CRD corresponding to the QosBinding CRD according to the Leaf QoS rule and the identification information of the two groups of Pods.
On the basis of the foregoing embodiments, when the QoS bandwidth guarantee type in the QoS rule is a bandwidth limitation imposed when at least one specified IP address accesses one Pod or a group of Pods, the control plane configuration information generating module 620 may be specifically configured to:
configure a class identifier for the HTB class matched with the QoS rule, and allocate a matching Handle identifier to each filter of the HTB class;
configure a Leaf QoS rule according to the one Pod or group of Pods specified in the QoS bandwidth guarantee type;
parse each item of information included in the QoS rule, and generate the control-plane QosBinding CRD according to the parsing result, the class identifier, each Handle identifier, and the Leaf QoS rule;
screen the pre-stored cache information of each Pod in the container platform according to the at least one item of Pod locating information to obtain identification information of the Pod or group of Pods, and acquire the at least one specified IP address included in the QoS rule;
and generate the AppliedToGroup CRD corresponding to the QosBinding CRD according to the Leaf QoS rule, the identification information of the Pod or group of Pods, and the at least one specified IP address.
The QoS configuration device provided by the embodiment of the invention can execute the QoS configuration method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 7 is a schematic structural diagram of a QoS configuration apparatus according to a fourth embodiment of the present invention. The device is configured in a working node in a container platform, and in conjunction with fig. 7, the device comprises: a monitoring module 710, a Pod position determining module 720, a physical networking device obtaining module 730, and a flow control policy issuing module 740, wherein:
the monitoring module 710 is configured to perform storage monitoring of user-defined resources CRD on the host node in the container platform;
a Pod position determining module 720, configured to detect, when it is monitored that a new QosBinding CRD and AppliedToGroup CRD are stored in the master node, whether the Pod configured by the monitored CRD is located on the local node;
a physical networking device obtaining module 730, configured to, if yes, obtain a CNI type of a CNI plug-in adapted to the local node, and obtain, according to the CNI type, a physical networking device used when the CNI plug-in realizes cross-node network communication;
and a flow control policy issuing module 740, configured to generate, according to the monitored QosBinding CRD and AppliedToGroup CRD, a traffic control (TC) policy and an IPset set, and issue them to the physical networking device, so as to guarantee bandwidth for the configured Pod.
On the basis of the foregoing embodiments, the physical networking device obtaining module 730 may be specifically configured to:
reading a CNI configuration file mounted in a preset storage directory to acquire the CNI type;
if the CNI type is detected to be an Overlay network, acquiring the bridge device matched with the local node;
and if the CNI type is detected to be a lower-layer Underlay network, acquiring the physical network card equipment matched with the local node.
On the basis of the above embodiments, the method may further include:
a queue configuration module, configured to: after the physical networking device used by the CNI plug-in to implement cross-node network communication is acquired according to the CNI type, detect whether an ingress-direction IFB device exists on the bridge device or the physical network card device;
if not, create an IFB device corresponding to the bridge device or the physical network card device, and redirect the traffic of the ingress-direction classless Qdisc queue to the egress direction of the created IFB device;
and create an HTB-type Qdisc queue as the classful Qdisc queue of the bridge device or the physical network card device in the egress direction.
The QoS configuration device provided by the embodiment of the invention can execute the QoS configuration method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Embodiment Five
Fig. 8 is a schematic structural diagram of a computer device according to a fifth embodiment of the present invention, and as shown in fig. 8, the computer device includes:
one or more processors 810, one processor 810 being illustrated in FIG. 8;
a memory 820;
the apparatus may further comprise: an input device 830 and an output device 840.
The processor 810, the memory 820, the input device 830 and the output device 840 in the apparatus may be connected by a bus or other means, for example, in fig. 8.
The memory 820 is a non-transitory computer-readable storage medium and may be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to a QoS configuration method in the embodiments of the present invention (for example, as shown in fig. 6, the user plane configuration information obtaining module 610, the control plane configuration information generating module 620, and the storage module 630; or, as shown in fig. 7, the monitoring module 710, the Pod position determining module 720, the physical networking device obtaining module 730, and the flow control policy issuing module 740). The processor 810 executes the various functional applications and data processing of the computer device by running the software programs, instructions, and modules stored in the memory 820, thereby implementing the QoS configuration method of the above method embodiments as executed by a master node in a container platform, namely:
obtaining QoS configuration information of a user plane corresponding to at least one Pod in a container platform, wherein the QoS configuration information of the user plane comprises: a QoS rule and at least one item of Pod positioning information;
generating a QoS binding CRD of a control plane according to the QoS rule, and generating an application group CRD corresponding to the QoS binding CRD according to each Pod positioning information;
and storing the QoS binding CRD and the application group CRD in the main node, so as to instruct the working node running the configured Pod to issue a flow control rule for the configured Pod to the matched physical networking equipment when it monitors the storage of the QoS binding CRD and the application group CRD.
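As a minimal illustration of the control-plane objects produced by the above steps, the following sketch builds a QoS binding CRD object and its application group CRD object as Kubernetes-style dictionaries; the API group/version, the kind names `QosBinding` and `AppGroup`, and the field layout are assumptions for illustration, since the embodiment does not fix a concrete schema:

```python
def build_control_plane_crds(qos_rule, pod_selectors):
    """Generate a QoS binding CRD and its application group CRD.

    The kind names and group/version are assumed; the method only
    requires that the binding CRD carries the parsed QoS rule and
    the group CRD carries the Pod positioning information.
    """
    qos_binding = {
        "apiVersion": "qos.example.com/v1",  # assumed group/version
        "kind": "QosBinding",
        "metadata": {"name": qos_rule["name"]},
        "spec": {
            "guaranteeType": qos_rule["type"],  # e.g. single-pod
            "maxBandwidth": qos_rule["max"],
            "minBandwidth": qos_rule["min"],
            "priority": qos_rule["priority"],
        },
    }
    app_group = {
        "apiVersion": "qos.example.com/v1",
        "kind": "AppGroup",
        "metadata": {"name": qos_rule["name"] + "-group"},
        # Pod positioning info: namespace / label / name / annotation.
        "spec": {"binding": qos_rule["name"], "selectors": pod_selectors},
    }
    return qos_binding, app_group

binding, group = build_control_plane_crds(
    {"name": "web-qos", "type": "single-pod", "max": "100Mbit",
     "min": "10Mbit", "priority": 1},
    [{"namespace": "default", "podLabel": {"app": "web"}}],
)
```

Storing these two objects in the main node is what the working nodes later monitor for.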
Alternatively, the processor implements another QoS configuration method, executed by a working node in a container platform, namely:
performing CRD storage monitoring on a main node in a container platform;
when it is monitored that a new QoS binding CRD and application group CRD are stored in the main node, detecting whether the Pod configured by the monitored CRDs is located on the local node;
if yes, acquiring the CNI type of the CNI plug-in adapted to the local node, and acquiring physical networking equipment used by the CNI plug-in to realize cross-node network communication according to the CNI type;
and generating a flow control TC strategy and an IPset set according to the monitored QoS binding CRD and the application group CRD, and issuing the flow control TC strategy and the IPset set to the physical networking equipment so as to realize bandwidth guarantee of the configured Pod.
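As an illustrative sketch of the last step, the following assembles an IPset set and a TC strategy from a monitored QoS binding CRD; the HTB class identifier `1:10`, the set name, and the assumption that an HTB root queue `1:` already exists on the Ifb device are illustrative, not fixed by this embodiment:

```python
def build_flow_control(binding, pod_ips, dev="ifb0"):
    """Sketch of the IPset set and TC strategy a working node could
    issue; class id 1:10, the set name, and the device are assumed."""
    spec = binding["spec"]
    set_name = binding["metadata"]["name"]
    # Collect the configured Pods' IP addresses into an IPset set.
    cmds = [f"ipset create {set_name} hash:ip"]
    cmds += [f"ipset add {set_name} {ip}" for ip in pod_ips]
    # HTB class: `rate` guarantees the minimum bandwidth, `ceil`
    # caps the maximum, `prio` carries the QoS priority.
    cmds.append(
        f"tc class add dev {dev} parent 1: classid 1:10 htb "
        f"rate {spec['minBandwidth']} ceil {spec['maxBandwidth']} "
        f"prio {spec['priority']}"
    )
    # Steer traffic whose source matches the set into that class.
    cmds.append(
        f"tc filter add dev {dev} parent 1: basic "
        f"match 'ipset({set_name} src)' flowid 1:10"
    )
    return cmds

cmds = build_flow_control(
    {"metadata": {"name": "web-qos"},
     "spec": {"minBandwidth": "10Mbit", "maxBandwidth": "100Mbit",
              "priority": 1}},
    ["10.244.1.5", "10.244.2.7"],
)
```

Issuing these commands on the physical networking device realizes the bandwidth guarantee for the configured Pods.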
The memory 820 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the computer device, and the like. Further, the memory 820 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 820 may optionally include memory located remotely from processor 810, which may be connected to the terminal device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 830 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the computer apparatus. The output device 840 may include a display device such as a display screen.
EXAMPLE six
The sixth embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the QoS configuration method performed by a master node in a container platform according to the embodiments of the present invention, namely:
obtaining QoS configuration information of a user plane corresponding to at least one Pod in a container platform, wherein the QoS configuration information of the user plane comprises: a QoS rule and at least one item of Pod positioning information;
generating a QoS binding CRD of a control plane according to the QoS rule, and generating an application group CRD corresponding to the QoS binding CRD according to each Pod positioning information;
and storing the QoS binding CRD and the application group CRD in the main node, so as to instruct the working node running the configured Pod to issue a flow control rule for the configured Pod to the matched physical networking equipment when it monitors the storage of the QoS binding CRD and the application group CRD.
Alternatively, when executed by a processor, the program implements a QoS configuration method performed by a work node in a container platform according to an embodiment of the present invention, that is:
performing custom resource definition (CRD) storage monitoring on a main node in a container platform;
when it is monitored that a new QoS binding CRD and application group CRD are stored in the main node, detecting whether the Pod configured by the monitored CRDs is located on the local node;
if yes, acquiring the CNI type of the CNI plug-in adapted to the local node, and acquiring physical networking equipment used by the CNI plug-in to realize cross-node network communication according to the CNI type;
and generating a flow control TC strategy and an IPset set according to the monitored QoS binding CRD and the application group CRD, and issuing the flow control TC strategy and the IPset set to the physical networking equipment so as to realize bandwidth guarantee of the configured Pod.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (17)

1. A method for configuring network quality of service (QoS) is executed by a main node in a container platform, and is characterized by comprising the following steps:
obtaining QoS configuration information of a user plane corresponding to at least one container group Pod in a container platform, wherein the QoS configuration information of the user plane comprises: a QoS rule and at least one item of Pod positioning information;
generating a control-plane QoS binding custom resource definition (CRD) according to the QoS rule, and generating an application group CRD corresponding to the QoS binding CRD according to each item of Pod positioning information;
and storing the QoS binding CRD and the application group CRD in the main node, so as to instruct the working node running the configured Pod to issue a flow control rule for the configured Pod to the matched physical networking equipment when it monitors the storage of the QoS binding CRD and the application group CRD.
2. The method of claim 1, wherein the QoS rules comprise: QoS bandwidth guarantee type, maximum bandwidth, minimum bandwidth and priority;
the QoS bandwidth guarantee types comprise: bandwidth limitations on a single Pod, end-to-end bandwidth limitations between one group of pods and another group of pods within a container platform, and bandwidth limitations when at least one specified internet protocol, IP, address accesses one or a group of pods; bandwidth limits include maximum and minimum bandwidth limits in the direction of access;
the Pod positioning information comprises at least one of: at least one custom field in the Pod namespace, Pod tag, Pod name, and Pod annotation.
3. The method of claim 2, wherein when the QoS bandwidth guarantee type in the QoS rule is bandwidth limitation for a single Pod, generating a QoS binding user-defined resource CRD of a control plane according to the QoS rule, and generating an application group CRD corresponding to the QoS binding CRD according to each Pod positioning information, comprises:
configuring class identification of a hierarchical token bucket HTB class matched with the QoS rule, and distributing matched manager Handle identification for each filter of the HTB class;
after analyzing each item of information included in the QoS rule, generating a QoS binding CRD of the control plane according to an analysis processing result, the class mark and each Handle mark;
and screening the prestored cache information of each Pod in the container platform according to the at least one item of Pod positioning information to obtain the identification information of the single Pod, and generating an application group CRD corresponding to the QoS binding CRD according to the identification information of the single Pod.
4. The method of claim 2, wherein when a QoS bandwidth guarantee type in the QoS rule is an end-to-end bandwidth limitation between one group of Pod and another group of Pod in a container platform, generating a QoS binding CRD of a control plane according to the QoS rule, and generating an application group CRD corresponding to the QoS binding CRD according to each Pod positioning information, comprises:
configuring class identification of a hierarchical token bucket HTB class matched with the QoS rule, and distributing matched Handle identification for each filter of the HTB class;
configuring a bottom layer Leaf QoS rule according to two groups of Pod specified in the QoS bandwidth guarantee type;
after analyzing various information included in the QoS rule, generating a QoS binding CRD of the control plane according to an analysis processing result, a class mark, each Handle mark and the Leaf QoS rule;
screening prestored cache information of each Pod in the container platform according to the at least one item of Pod positioning information to obtain identification information of the two groups of pods;
and generating an application CRD corresponding to the QoS binding CRD according to the Leaf QoS rule and the identification information of the two groups of Pods.
5. The method of claim 2, wherein when the QoS bandwidth guarantee type in the QoS rule is bandwidth limitation when at least one specified IP address accesses one or a group of Pod, generating a QoS binding CRD of a control plane according to the QoS rule, and generating an application CRD corresponding to the QoS binding CRD according to each Pod location information, comprises:
configuring class identification of HTB class matched with the QoS rule, and distributing matched Handle identification for each filter of the HTB class;
configuring a Leaf QoS rule according to one or a group of Pod specified in the QoS bandwidth guarantee type;
after analyzing various information included in the QoS rule, generating a QoS binding CRD of the control plane according to an analysis processing result, a class mark, each Handle mark and the Leaf QoS rule;
screening prestored cache information of each Pod in the container platform according to the at least one item of Pod positioning information to obtain identification information of the Pod or the group of pods, and acquiring at least one specified IP address included in a QoS rule;
and generating an application group CRD corresponding to the QoS binding CRD according to the Leaf QoS rule, the identification information of the Pod or the group of Pods and the at least one specified IP address.
6. A method for configuring network QoS (quality of service) is executed by a working node in a container platform, and is characterized by comprising the following steps:
performing custom resource definition (CRD) storage monitoring on a main node in a container platform;
when it is monitored that a new QoS binding CRD and application group CRD are stored in the main node, detecting whether the Pod configured by the monitored CRDs is located on the local node;
if yes, acquiring the CNI type of a container network interface CNI plug-in adapted to the local node, and acquiring physical networking equipment used by the CNI plug-in to realize cross-node network communication according to the CNI type;
and generating a flow control TC strategy and an IPset set according to the monitored QoS binding CRD and the application group CRD, and issuing the flow control TC strategy and the IPset set to the physical networking equipment so as to realize bandwidth guarantee of the configured Pod.
The method according to claim 6, wherein obtaining a CNI type of a CNI plug-in adapted to the local node, and obtaining, according to the CNI type, a physical networking device used when the CNI plug-in realizes cross-node network communication, includes:
reading a CNI configuration file mounted in a preset storage directory to acquire the CNI type;
if it is detected that the CNI type corresponds to an overlay (Overlay) network, acquiring a bridge device matched with the local node;
and if it is detected that the CNI type corresponds to an underlay (Underlay) network, acquiring the physical network card device matched with the local node.
8. The method according to claim 7, after obtaining, according to the CNI type, a physical networking device used by the CNI plug-in to implement cross-node network communication, further comprising:
detecting whether an Ifb device in the incoming direction exists on the bridge device or the physical network card device;
if no such Ifb device exists, creating an Ifb device corresponding to the bridge device or the physical network card device, and redirecting the traffic of the classless Qdisc queue in the incoming direction to the outgoing direction of the created Ifb device;
and creating an HTB-type Qdisc queue in the outgoing direction as a classful Qdisc queue of the bridge device or the physical network card device.
9. A master node in a container platform, comprising a processor and a memory to store instructions that, when executed, cause the processor to:
obtaining QoS configuration information of a user plane corresponding to at least one container group Pod in a container platform, wherein the QoS configuration information of the user plane comprises: a QoS rule and at least one item of Pod positioning information;
generating a QoS binding user self-defined resource CRD of a control plane according to the QoS rule, and generating an application group CRD corresponding to the QoS binding CRD according to each Pod positioning information;
and storing the QoS binding CRD and the application group CRD in the main node, so as to instruct the working node running the configured Pod to issue the matched flow control rule to the matched physical networking equipment when it monitors the storage of the QoS binding CRD and the application group CRD.
10. The master node of claim 9,
the QoS rules include: QoS bandwidth guarantee type, maximum bandwidth, minimum bandwidth and priority;
the QoS bandwidth guarantee types comprise: bandwidth limitations on a single Pod, end-to-end bandwidth limitations between one group of pods and another group of pods within a container platform, and bandwidth limitations when at least one specified internet protocol, IP, address accesses one or a group of pods; bandwidth limits include maximum and minimum bandwidth limits in the direction of access;
the Pod positioning information comprises at least one of: at least one custom field in the Pod namespace, Pod tag, Pod name, and Pod annotation.
The master node of claim 10, wherein when the QoS bandwidth guarantee type in the QoS rule is bandwidth limitation for a single Pod, the processor is configured to generate a control-plane QoS binding custom resource definition (CRD) according to the QoS rule and generate an application group CRD corresponding to the QoS binding CRD according to each item of Pod positioning information by:
configuring class identification of a hierarchical token bucket HTB class matched with the QoS rule, and distributing matched manager Handle identification for each filter of the HTB class;
after analyzing each item of information included in the QoS rule, generating a QoS binding CRD of the control plane according to an analysis processing result, the class mark and each Handle mark;
and screening the prestored cache information of each Pod in the container platform according to the at least one item of Pod positioning information to obtain the identification information of the single Pod, and generating an application group CRD corresponding to the QoS binding CRD according to the identification information of the single Pod.
The master node of claim 10, wherein when the QoS bandwidth guarantee type in the QoS rule is an end-to-end bandwidth limitation between one group of Pod and another group of Pod in a container platform, the processor is configured to generate a QoS binding CRD of a control plane according to the QoS rule and generate an application group CRD corresponding to the QoS binding CRD according to each Pod positioning information by:
configuring class identification of a hierarchical token bucket HTB class matched with the QoS rule, and distributing matched manager Handle identification for each filter of the HTB class;
configuring a bottom layer Leaf QoS rule according to two groups of Pod specified in the QoS bandwidth guarantee type;
after analyzing various information included in the QoS rule, generating a QoS binding CRD of the control plane according to an analysis processing result, a class mark, each Handle mark and the Leaf QoS rule;
screening prestored cache information of each Pod in the container platform according to the at least one item of Pod positioning information to obtain identification information of the two groups of pods;
and generating an application CRD corresponding to the QoS binding CRD according to the Leaf QoS rule and the identification information of the two groups of Pods.
The master node of claim 10, wherein when the QoS bandwidth guarantee type in the QoS rule is a bandwidth limitation when at least one specified IP address accesses one or a group of Pod, the processor is configured to generate a QoS binding CRD of a control plane according to the QoS rule and generate an application CRD corresponding to the QoS binding CRD according to each Pod location information by:
configuring class identification of HTB class matched with the QoS rule, and distributing matched Handle identification for each filter of the HTB class;
configuring a Leaf QoS rule according to one or a group of Pod specified in the QoS bandwidth guarantee type;
after analyzing various information included in the QoS rule, generating a QoS binding CRD of the control plane according to an analysis processing result, a class mark, each Handle mark and the Leaf QoS rule;
screening prestored cache information of each Pod in the container platform according to the at least one item of Pod positioning information to obtain identification information of the Pod or the group of pods, and acquiring at least one specified IP address included in a QoS rule;
and generating an application group CRD corresponding to the QoS binding CRD according to the Leaf QoS rule, the identification information of the Pod or the group of Pods and the at least one specified IP address.
14. A worker node in a container platform comprising a processor and a memory, the memory to store instructions that when executed cause the processor to:
performing custom resource definition (CRD) storage monitoring on a main node in a container platform;
when it is monitored that a new QoS binding CRD and application group CRD are stored in the main node, detecting whether the configured Pod corresponding to the monitored CRDs is located on the local node;
if yes, acquiring the CNI type of a container network interface CNI plug-in adapted to the local node, and acquiring physical networking equipment used by the CNI plug-in to realize cross-node network communication according to the CNI type;
and generating a flow control TC strategy and an IPset set according to the monitored QoS binding CRD and the application group CRD, and issuing the flow control TC strategy and the IPset set to the physical networking equipment so as to realize bandwidth guarantee for the configured Pod.
The working node of claim 14, wherein the processor is configured to obtain a CNI type of a CNI plug-in adapted to the local node, and obtain, according to the CNI type, a physical networking device used when the CNI plug-in implements cross-node network communication, by:
reading a CNI configuration file mounted in a preset storage directory to acquire the CNI type;
if it is detected that the CNI type corresponds to an overlay (Overlay) network, acquiring a bridge device matched with the local node;
and if it is detected that the CNI type corresponds to an underlay (Underlay) network, acquiring the physical network card device matched with the local node.
The working node of claim 15, wherein, after acquiring according to the CNI type the physical networking device used by the CNI plug-in to implement cross-node network communication, the processor is further configured to:
detecting whether an Ifb device in the incoming direction exists on the bridge device or the physical network card device;
if no such Ifb device exists, creating an Ifb device corresponding to the bridge device or the physical network card device, and redirecting the traffic of the classless Qdisc queue in the incoming direction to the outgoing direction of the created Ifb device;
and creating an HTB-type Qdisc queue in the outgoing direction as a classful Qdisc queue of the bridge device or the physical network card device.
17. A storage medium for storing instructions for performing the method of configuring network QoS according to any one of claims 1 to 5; alternatively, the instructions are used for executing the configuration method of network QoS according to any one of claims 6 to 8.
CN202111003765.7A 2021-08-30 2021-08-30 Method, equipment and medium for configuring network service quality Active CN113709810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111003765.7A CN113709810B (en) 2021-08-30 2021-08-30 Method, equipment and medium for configuring network service quality

Publications (2)

Publication Number Publication Date
CN113709810A true CN113709810A (en) 2021-11-26
CN113709810B CN113709810B (en) 2024-01-26

Family

ID=78656733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111003765.7A Active CN113709810B (en) 2021-08-30 2021-08-30 Method, equipment and medium for configuring network service quality

Country Status (1)

Country Link
CN (1) CN113709810B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114661419A (en) * 2022-03-25 2022-06-24 星环信息科技(上海)股份有限公司 Service quality control system and method
CN114710549A (en) * 2022-02-25 2022-07-05 网宿科技股份有限公司 A method, system and service node for dynamic management of network card in container platform
CN115086166A (en) * 2022-05-19 2022-09-20 阿里巴巴(中国)有限公司 Computing system, container network configuration method, and storage medium
CN115134310A (en) * 2022-08-31 2022-09-30 浙江大华技术股份有限公司 Traffic scheduling method and device, storage medium and electronic device
CN116668467A (en) * 2022-02-21 2023-08-29 北京金山云网络技术有限公司 A resource access method, device, cloud hosting system and electronic equipment
CN116996379A (en) * 2023-08-11 2023-11-03 中科驭数(北京)科技有限公司 Cloud primary network service quality configuration method and device based on OVN-Kubernetes
CN118394478A (en) * 2024-05-07 2024-07-26 北京宝兰德软件股份有限公司 Method, device, equipment and medium for applying non-container to nanotubes
WO2025134024A1 (en) * 2023-12-21 2025-06-26 云智能资产控股(新加坡)私人股份有限公司 Bandwidth limiting system and method, and electronic device, storage medium and program product

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160205024A1 (en) * 2015-01-12 2016-07-14 Citrix Systems, Inc. Large scale bandwidth management of ip flows using a hierarchy of traffic shaping devices
WO2018009367A1 (en) * 2016-07-07 2018-01-11 Cisco Technology, Inc. System and method for scaling application containers in cloud environments
US20190116518A1 (en) * 2017-10-12 2019-04-18 Intel IP Corporation Device requested protocol data unit session modification in the 5g system
US20190166518A1 (en) * 2017-11-27 2019-05-30 Parallel Wireless, Inc. Access Network Collective Admission Control
CN111371627A (en) * 2020-03-24 2020-07-03 广西梯度科技有限公司 Method for setting multiple IP (Internet protocol) in Pod in Kubernetes
CN111917586A (en) * 2020-08-07 2020-11-10 北京凌云雀科技有限公司 Container bandwidth adjusting method, server and storage medium
CN112073330A (en) * 2020-09-02 2020-12-11 浪潮云信息技术股份公司 Cloud platform container network current limiting method
CN112104499A (en) * 2020-09-14 2020-12-18 浪潮思科网络科技有限公司 Container network model construction method, device, equipment and medium
CN112165435A (en) * 2020-09-29 2021-01-01 山东省计算中心(国家超级计算济南中心) A bidirectional flow control method and system based on virtual machine network quality of service
CN112187660A (en) * 2020-08-31 2021-01-05 浪潮云信息技术股份公司 Tenant flow limiting method and system for cloud platform container network

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116668467A (en) * 2022-02-21 2023-08-29 北京金山云网络技术有限公司 A resource access method, device, cloud hosting system and electronic equipment
CN114710549A (en) * 2022-02-25 2022-07-05 网宿科技股份有限公司 A method, system and service node for dynamic management of network card in container platform
CN114710549B (en) * 2022-02-25 2023-09-22 网宿科技股份有限公司 A dynamic management method, system and business node for network cards in a container platform
CN114661419A (en) * 2022-03-25 2022-06-24 星环信息科技(上海)股份有限公司 Service quality control system and method
CN114661419B (en) * 2022-03-25 2025-04-25 星环信息科技(上海)股份有限公司 A service quality control system and method
CN115086166B (en) * 2022-05-19 2024-03-08 阿里巴巴(中国)有限公司 Computing system, container network configuration method, and storage medium
CN115086166A (en) * 2022-05-19 2022-09-20 阿里巴巴(中国)有限公司 Computing system, container network configuration method, and storage medium
CN115134310A (en) * 2022-08-31 2022-09-30 浙江大华技术股份有限公司 Traffic scheduling method and device, storage medium and electronic device
CN115134310B (en) * 2022-08-31 2022-12-06 浙江大华技术股份有限公司 Traffic scheduling method and device, storage medium and electronic device
CN116996379A (en) * 2023-08-11 2023-11-03 中科驭数(北京)科技有限公司 Cloud primary network service quality configuration method and device based on OVN-Kubernetes
CN116996379B (en) * 2023-08-11 2024-06-07 中科驭数(北京)科技有限公司 OVN-Kubernetes-based cloud primary network service quality configuration method and device
WO2025134024A1 (en) * 2023-12-21 2025-06-26 云智能资产控股(新加坡)私人股份有限公司 Bandwidth limiting system and method, and electronic device, storage medium and program product
CN118394478A (en) * 2024-05-07 2024-07-26 北京宝兰德软件股份有限公司 Method, device, equipment and medium for applying non-container to nanotubes
CN118394478B (en) * 2024-05-07 2024-10-18 北京宝兰德软件股份有限公司 Method, device, equipment and medium for applying non-container to nanotubes

Also Published As

Publication number Publication date
CN113709810B (en) 2024-01-26

Similar Documents

Publication Publication Date Title
CN113709810B (en) Method, equipment and medium for configuring network service quality
US11461154B2 (en) Localized device coordinator with mutable routing information
US11044230B2 (en) Dynamically opening ports for trusted application processes hosted in containers
CN110462589B (en) On-demand code execution in a local device coordinator
US10417049B2 (en) Intra-code communication in a localized device coordinator
US11178254B2 (en) Chaining virtual network function services via remote memory sharing
US10452439B2 (en) On-demand code execution in a localized device coordinator
CN103946834B (en) Virtual network interface object
US8819211B2 (en) Distributed policy service
US10216540B2 (en) Localized device coordinator with on-demand code execution capabilities
CN104038401B (en) Method and system for interoperability for distributed overlay virtual environments
JP2021529386A (en) Execution of auxiliary functions in an on-demand network code execution system
US20120291024A1 (en) Virtual Managed Network
US10372486B2 (en) Localized device coordinator
CN110352401B (en) Local device coordinator with on-demand code execution capability
US11372654B1 (en) Remote filesystem permissions management for on-demand code execution
CN113301116A (en) Cross-network communication method, device, system and equipment for microservice application
CN111258627A (en) Interface document generation method and device
CN114650223B (en) Network configuration method and device of Kubernetes cluster and electronic equipment
CN117395225A (en) Data access method, device, system and equipment in a cloud-native container network
CN115378993B (en) Method and system for supporting namespace-aware service registration and discovery
US11151022B1 (en) Testing of executable code for local device coordinator
CN118827205A (en) A traffic forwarding method, device, computer equipment and storage medium
CN120433982A (en) Firewall configuration method for protecting cloud security, network traffic processing method and device
CN119520629A (en) Routing method and device for business data access request

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant