CN117459468B - A method and system for processing business traffic in a multi-CNI container network
- Publication number: CN117459468B
- Application number: CN202311435191.XA
- Authority: CN (China)
- Prior art keywords: network, CNI, container, traffic, DPU
- Prior art date: 2023-10-31
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04L47/2441: Traffic characterised by specific attributes, e.g. priority or QoS, relying on flow classification, e.g. using integrated services [IntServ] (under H04L47/00 Traffic control in data switching networks; H04L47/10 Flow control, congestion control; H04L47/24 Traffic characterised by specific attributes)
- H04L45/645: Splitting route computation layer and forwarding layer, e.g. routing according to path computational element [PCE] or based on OpenFlow functionality (under H04L45/00 Routing or path finding of packets in data switching networks)
Abstract
The invention provides a method and a system for processing service traffic in a multi-CNI container network. The multi-CNI container network comprises a first CNI network for carrying service management traffic and a second CNI network for carrying service data traffic. The method comprises the following steps: the first CNI network transmits the service management traffic generated by each container group, through a first-type network card, to the kernel protocol stack on the Host side of the multi-CNI container network, where it is forwarded according to a preset forwarding strategy; the second CNI network offloads the service data traffic generated by each container group to a DPU device through a second-type network card and issues a flow table to the DPU device; and the DPU device forwards the service data traffic using a pre-deployed traffic forwarding plane and the flow table. The invention solves the problem of heavy server CPU consumption when processing traffic in a multi-CNI container network.
Description
Technical Field
The present invention relates to the field of multi-CNI network traffic processing technologies, and in particular, to a method and a system for processing traffic in a multi-CNI container network.
Background
Container virtualization technology runs multiple containers on one physical host, each container having an independent running environment, with multiple Container Network Interfaces (Container Network Interface, CNI) present. A multi-CNI container network based on container virtualization technology deploys multiple CNIs in one container cloud to provide different container network functions and builds a cloud server cluster, on which cloud service products implemented as container services are deployed through Docker technology. The container cloud is a cloud delivery model at the PaaS layer. It may be deployed in two ways: containers may run on virtual machines (as in many traditional enterprises), or directly on bare-metal servers. Kubernetes (K8s for short) is a portable and extensible open-source platform for managing containerized workloads and services and for building K8s clusters (one type of cloud server cluster); it facilitates declarative configuration and automation. CNI was originally a container network specification initiated by CoreOS and is the basis of Kubernetes network plug-ins. Its basic idea is that, when creating a container, the Container Runtime first creates a network namespace, then calls the CNI plug-in to configure the network for this netns, and then starts the processes in the container. CNI has since joined the Cloud Native Computing Foundation (CNCF) and become a CNCF-endorsed network model.
In current container virtualization technology, due to the service requirements of containers in a cluster, multiple CNI networks as shown in fig. 2 are generally adopted to provide a container group (Pod of Containers, abbreviated POD) with multiple mutually isolated network planes, so that different service flows do not affect each other. Management traffic and service traffic can thus be separated, and the management network is not affected by network congestion when service traffic grows to occupy a large amount of bandwidth. For example, it is currently common to use Flannel as the default CNI to provide default management network support for a K8s network, and to use another CNI such as Calico, Macvlan, or SR-IOV as a secondary CNI to provide a data network for service traffic. A POD is the minimum deployment and management unit in a Kubernetes cluster; its containers are co-located and co-scheduled.
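For orientation, the sketch below shows how a secondary network of this kind is commonly declared when Multus (the multi-CNI support component named later in this description) is used: a NetworkAttachmentDefinition wrapping a plain CNI configuration. The names, master interface, and subnet are illustrative assumptions, not values from the patent.

```yaml
# Hypothetical secondary data network under Multus; all names, the master
# interface, and the subnet are illustrative assumptions.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: data-net              # secondary (data-plane) network
  namespace: kube-system
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "mode": "bridge",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.100.0/24"
    }
  }'
```

A Pod that references this definition in its annotations receives an extra interface on the data network in addition to the default management network.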
The current multi-CNI scheme can provide network isolation for different services and ensure that services with different requirements work on independent network planes. However, each CNI uses the kernel protocol stack for network forwarding, so every additional CNI increases CPU and memory consumption on the Host side of the server; even bypass technologies such as DPDK, which shorten the kernel protocol stack processing path, still add substantial CPU consumption. Currently, on both public and private cloud servers, network processing occupies a large share of CPU resources, preempting the CPU resources intended for core business logic. Providing network isolation for cloud server clusters (including K8s clusters) through multiple CNIs therefore actually adds extra consumption of cloud server CPU resources.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and system for processing traffic in a multi-CNI container network, which obviates or mitigates one or more of the drawbacks of the prior art.
In one aspect, the present invention provides a method for processing traffic in a multi-CNI container network, where the multi-CNI container network includes a first CNI network for carrying service management traffic and a second CNI network for carrying service data traffic generated by the container groups when processing services, and each container group in the multi-CNI container network has a first-type network card corresponding to the first CNI network and a second-type network card corresponding to the second CNI network. The method includes the following steps:
The first CNI network transmits the service management traffic generated by each container group, through the first-type network card, to the kernel protocol stack on the Host side of the multi-CNI container network, and the kernel protocol stack forwards the service management traffic according to a preset forwarding strategy;
the second CNI network offloads the service data traffic generated by each container group, through the second-type network card, to a DPU device connected to the second CNI network, and issues to the DPU device a flow table for guiding traffic forwarding;
the DPU device uses a pre-deployed traffic forwarding plane for forwarding service data traffic, and forwards the service data traffic offloaded from the second CNI network according to the flow table from the second CNI network.
In some embodiments of the present invention, the multi-CNI container network is created by a network orchestration tool for cloud-native container virtualization: the tool is installed and deployed on the control node and the cloud server nodes of a cloud server cluster to build the container virtualization cluster, and a multi-CNI support component is pre-deployed in the tool so that it supports multiple CNIs.
In some embodiments of the present invention, an interface detection plug-in is pre-deployed on the cloud-native container virtualization network orchestration tool, so that the second CNI network can discover physical function interfaces and/or virtual function interfaces on server nodes; virtual function interfaces based on single-root I/O virtualization are provided through the interface detection plug-in.
In some embodiments of the present invention, the cloud-native container virtualization network orchestration tool is Kubernetes and the multi-CNI support component is Multus-CNI, and the method further comprises deploying Calico CNI and ovn-kubernetes CNI on the installed and deployed Kubernetes, the Calico CNI carrying the first CNI network and the ovn-kubernetes CNI plug-in carrying the second CNI network, where Calico CNI is set as the default CNI of Kubernetes.
In some embodiments of the invention, the method further comprises pre-deploying ovn-kubernetes CNI components on the control node of the cloud server cluster, comprising ovs, ovnkube-master, ovnkube-db, and ovnkube-node (in full mode), and ovn-kubernetes CNI components on the Host side of the worker nodes of the cloud server cluster, comprising OVN-K8s-CNI-Overlay and ovnkube-node.
In some embodiments of the present invention, the first type network card is a virtual network card interface, and the second type network card is a virtual function interface supporting single root I/O virtualization.
In some embodiments of the invention, the method further comprises pre-deploying, in the system on chip of the DPU device, a traffic forwarding plane for forwarding service data traffic.
In some embodiments of the present invention, the method further comprises pre-deploying, in the system on chip of the DPU device and using an application container engine, a second CNI network control component for supporting flow table issuing and a second CNI network node configuration component for setting interface configuration; the server and the DPU device are connected to the multi-CNI container network through a switch and/or an optical switch.
In some embodiments of the present invention, when a plurality of DPU devices are connected to the second CNI network, a data connection is established between different DPU devices through a switch or an optical switch, and a data connection is established between different container groups within the same DPU device.
In another aspect, the present invention provides a system for processing traffic in a multi-CNI container network, comprising a processor and a memory, the memory having stored therein computer instructions for executing the computer instructions stored in the memory, the system implementing the steps of the method according to any of the above embodiments when the computer instructions are executed by the processor.
According to the method and system for processing service traffic in a multi-CNI container network, the DPU device is connected to the second CNI network and forwards the service data traffic on its pre-deployed traffic forwarding plane based on the received flow table for guiding traffic forwarding. Service data traffic forwarding, originally handled on the Host side of the multi-CNI container network (i.e., the cloud server nodes), is thus moved to a DPU device dedicated to data processing, which handles network packets efficiently. This effectively relieves the server CPU, solves the problem of heavy server CPU consumption when processing multi-CNI container network traffic, and leaves only the service management traffic for the server CPU, which can then focus on real business logic. Meanwhile, the invention preserves the multi-CNI container network's isolation between the management network carrying service management traffic and the data network carrying service data traffic, and markedly improves the CPU utilization efficiency of the cloud server nodes.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to the above-described specific ones, and that the above and other objects that can be achieved with the present invention will be more clearly understood from the following detailed description.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate and together with the description serve to explain the application. In the drawings:
fig. 1 is a flowchart of a method for processing traffic in a multi-CNI container network according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of network-plane traffic processing in a prior-art multi-CNI container network.
Fig. 3 is a schematic diagram of network-plane traffic processing in a multi-CNI container network according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of network-plane traffic processing in a multi-CNI container network built on K8s according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a multi-CNI container network built based on K8s on the Host side in a specific embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a container group and DPU equipment on the Host side of a multi-CNI container network built based on K8s in an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following embodiments and the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent. The exemplary embodiments of the present invention and the descriptions thereof are used herein to explain the present invention, but are not intended to limit the invention.
It should be noted here that, in order to avoid obscuring the present invention due to unnecessary details, only structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, while other details not greatly related to the present invention are omitted.
It should be emphasized that the term "comprises/comprising" when used herein is taken to specify the presence of stated features, elements, steps or components, but does not preclude the presence or addition of one or more other features, elements, steps or components.
It is also noted herein that the term "coupled" may refer to not only a direct connection, but also an indirect connection in which an intermediate is present, unless otherwise specified.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same reference numerals represent the same or similar components, or the same or similar steps.
In a network, each host, router, and server has a network layer that can be broken down into two interacting parts: a data plane and a control plane. The data plane refers to the network-layer function of the host, router, or server that determines how a datagram (network packet) arriving at one of its input links is forwarded to one of its output links. In the present invention, the problem faced on the server side is that the large volume of service data traffic generated by the multi-CNI container network occupies a large amount of server CPU resources. The server may be a physical server or a cloud server. The invention aims to provide a method for processing the service data traffic carried by a multi-CNI container network so as to relieve the heavy CPU occupation on the server side.
A Data Processing Unit (DPU) is a recently developed class of special-purpose processors and, after the CPU and GPU, the third important computing chip in the data center, providing a computing engine for high-bandwidth, low-latency, data-intensive computing scenarios. It is a new generation of data-centric, I/O-intensive computing chip that follows a software-defined technology route to support virtualization of the infrastructure resource layer, improving the efficiency of the computing system, reducing the total cost of ownership of the whole system, raising data processing efficiency, and lowering the performance loss of other computing chips.
Kubernetes, also known as K8s, is an open-source system for automatically deploying, scaling, and managing containerized applications. It groups the containers that make up an application into logical units to ease management and service discovery. Kubernetes grew out of Google's 15 years of production-environment operations experience, combined with the best ideas and practices of the community, and is typically used to build a multi-CNI container network. Linux containers provide a lightweight virtualization method that can run multiple virtual environments (containers) simultaneously on a single host. Unlike technologies such as XEN or KVM, where the processor simulates an entire hardware environment and a hypervisor controls the virtual machines, containers provide virtualization at the operating-system level, with the kernel controlling the isolated containers. Kubernetes has a large and rapidly growing ecosystem, and its services, support, and tools are widely available. The name Kubernetes originates from Greek and means "helmsman" or "pilot"; the abbreviation K8s comes from the eight characters between "k" and "s". Google open-sourced the Kubernetes project in 2014. Kubernetes cluster networks have some basic core requirements: each network interface on a Pod must have its own unique IP; a Pod must be able to communicate with any other Pod in the Kubernetes cluster without NAT; and agents on a node (e.g., system daemons, kubelet) must be able to communicate with all PODs on that node. However, Kubernetes has no built-in tools or components to satisfy these core network requirements; instead it provides these capabilities through network plug-ins compliant with the Container Network Interface (CNI) specification. Well-known CNIs such as Flannel and Calico can provide a Kubernetes cluster network with the required core network capabilities.
The Container Network Interface (CNI) is a project under the CNCF umbrella, consisting of a set of specifications and libraries for configuring the network interfaces of Linux containers, together with some plug-ins. CNI is concerned only with network allocation when a container is created and with releasing network resources when a container is deleted.
OVS (Open vSwitch) aims to provide a full-featured virtual switch for Linux-based hypervisors. As a virtual switch, it supports port mirroring, VLANs, and several other network monitoring protocols.
To address the consumption of Host-side (i.e., server-side) CPU by the network-plane traffic of multi-CNI container networks built in the prior art, the invention provides a method for processing traffic in a multi-CNI container network: large-scale service data traffic is offloaded to a DPU device, the DPU device and the server are connected to the same network plane through a switch, and the proposed traffic offloading strategy is implemented through internally pre-deployed programs.
Fig. 1 is a flowchart of a method for processing traffic in a multi-CNI container network according to an embodiment of the present invention, where the multi-CNI container network includes a first CNI network for carrying service management traffic and a second CNI network for carrying service data traffic generated by the container groups when processing services, and each container group in the multi-CNI container network has a first-type network card corresponding to the first CNI network and a second-type network card corresponding to the second CNI network. The method includes the following steps:
Step S110: the first CNI network transmits the service management traffic generated by each container group, through the first-type network card, to the kernel protocol stack on the Host side of the multi-CNI container network, and the kernel protocol stack forwards the service management traffic according to a preset forwarding strategy.
A container group (POD, Pod of Containers) contains one or more containers together with the resources used to manage them, and is an abstraction of one service or a set of services (processes). Network and storage may be shared among the containers of a POD (a POD can loosely be understood as a logical virtual machine, although it is not one). In some embodiments of the present invention, each CNI network represents a network plane: the first CNI network is the control plane carrying control-plane traffic, the second CNI network is the data plane carrying data-plane traffic, and the volume of data-plane traffic is much larger than that of control-plane traffic.
In the implementation, the Host side of the multi-CNI container network refers to the physical host on which the containers run; the CNI plug-ins run on the Host side and are responsible for creating and configuring network interfaces for the containers. In a cloud server cluster environment, the Host side is the server side and the corresponding other side is the DPU computing card side. The DPU computing card assists the server in traffic forwarding and is mainly responsible for handling the heavy service data traffic of the data plane; it is connected through a switch or an optical switch to the second CNI network, which is located in the cloud server cluster. In cloud computing, the Host side typically refers to a physical device provided by a cloud service provider.
Step S120: the second CNI network offloads the service data traffic generated by each container group, through the second-type network card, to a DPU device connected to the second CNI network, and issues to the DPU device a flow table for guiding traffic forwarding, a traffic forwarding plane for forwarding service data traffic having been deployed on the DPU device in advance.
In the implementation, the DPU device and the Host-side server are jointly connected, through a switch, to the second CNI network whose traffic is to be offloaded. Since different CNI networks can handle different services, there may be more than one second CNI network processing service data traffic; different functions can be realized by different second CNI networks, and the traffic offloading principle is the same for all of them. Flow table forwarding is usually performed directly by the switch hardware based on OpenFlow technology: when service data traffic from the second CNI network arrives at the switch, a flow table is generated on the switch at the same time, the flow table travels with the service data traffic to the DPU device connected to the second CNI network, and the DPU device forwards the service data traffic based on the flow table and the pre-deployed OVS.
Step S130: the DPU device uses the pre-deployed traffic forwarding plane for forwarding service data traffic, and forwards the service data traffic offloaded from the second CNI network according to the flow table from the second CNI network.
In a specific implementation process, a traffic forwarding plane for forwarding traffic data traffic can be pre-deployed on the DPU device through an OVS plug-in. The plug-in implementation is merely an example, and the present invention is not limited thereto.
According to this method for processing service traffic in a multi-CNI container network, the DPU device is connected to the second CNI network and forwards the service data traffic on its pre-deployed traffic forwarding plane based on the received flow table for guiding traffic forwarding. Service data traffic forwarding, originally handled on the Host side of the multi-CNI container network (i.e., the cloud server nodes), is thus moved to a DPU device dedicated to data processing, which handles network packets efficiently. This effectively relieves the server CPU, solves the problem of heavy server CPU consumption when processing multi-CNI container network traffic, and leaves only the service management traffic for the server CPU, which can then focus on real business logic. Meanwhile, the invention preserves the multi-CNI container network's isolation between the management network carrying service management traffic and the data network carrying service data traffic, and markedly improves the CPU utilization efficiency of the cloud server nodes.
As shown in fig. 3, the present invention adds DPU devices on the basis of fig. 2, deploys OVS on the DPU SoC (system on chip) to forward the offloaded service data traffic, and adapts the interfaces to veth interfaces and VF interfaces accordingly.
In a specific implementation, the multi-CNI container network needs to be built in advance and can be created with a network orchestration tool for cloud-native container virtualization: the tool is installed and deployed on the control node and the cloud server nodes of the cloud server cluster to build the container virtualization cluster, and a multi-CNI support component is pre-deployed in the tool so that it supports multiple CNIs. For example, Kubernetes is installed and deployed on the Master node and the cloud server nodes of the cloud server cluster to form the container virtualization cluster; the control node is called the Master node and the working nodes are called Worker nodes.
In some embodiments of the present invention, an interface detection plug-in is also pre-deployed on the cloud-native container virtualization network orchestration tool, so that the second CNI network can discover physical function interfaces and/or virtual function interfaces on server nodes; virtual function interfaces based on single-root I/O virtualization are provided through the interface detection plug-in. For example, in the Kubernetes scenario the interface detection plug-in is the SR-IOV Network Device Plugin, which enables ovn-kubernetes CNI to discover Physical Function (PF) interfaces and/or Virtual Function (VF) interfaces on server nodes and also provides the capabilities of SR-IOV VF interfaces. An SR-IOV Virtual Function (VF) is a lightweight PCI Express (PCIe) function on a network adapter that supports single-root I/O virtualization (SR-IOV).
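As a sketch of how such VF resources are commonly exposed to Kubernetes, the ConfigMap below follows the SR-IOV Network Device Plugin configuration format; the resource name, vendor ID, driver, and PF name are illustrative assumptions rather than values from the patent.

```yaml
# Hypothetical SR-IOV Network Device Plugin configuration; the resource
# name, vendor/driver selectors, and PF name are illustrative assumptions.
# "15b3" is an example NIC/DPU vendor ID, and ens1f0 stands in for the PF
# backing the data-network VFs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sriovdp-config
  namespace: kube-system
data:
  config.json: |
    {
      "resourceList": [
        {
          "resourceName": "dpu_vf_pool",
          "selectors": {
            "vendors": ["15b3"],
            "drivers": ["mlx5_core"],
            "pfNames": ["ens1f0"]
          }
        }
      ]
    }
```

VFs matched by the selectors are advertised to the kubelet as an extended resource that Pods can request, as shown later in the Pod manifest sketch.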
With this embodiment, the multi-CNI container network can detect the attached devices and identify the physical function and virtual function interfaces that are plugged in, ensuring that the functions proposed by this scheme can be realized smoothly.
In a specific implementation, the cloud-native container virtualization network orchestration tool is Kubernetes and the multi-CNI support component is Multus-CNI, and the method further comprises (1) deploying Calico CNI and ovn-kubernetes CNI on the installed and deployed Kubernetes, where Calico CNI carries the first CNI network and the ovn-kubernetes CNI plug-in carries the second CNI network, and (2) setting Calico CNI as the default CNI of Kubernetes. Calico is an open-source container cluster network CNI plug-in that forwards network traffic through the Linux kernel protocol stack; it provides IP allocation for containers in cloud-native virtualization clusters and basic network support, such as network connectivity, for containers, virtual machines, and hosts, and can be used on PaaS or IaaS platforms such as Kubernetes, OpenShift, Docker EE, and OpenStack. (Terminology: PaaS, Platform-as-a-Service; IaaS, Infrastructure-as-a-Service; SaaS, Software-as-a-Service.) The OVN-Kubernetes network plug-in is an open-source, fully functional Kubernetes CNI plug-in that uses Open Virtual Network (OVN) to manage network traffic and provides an overlay-based network implementation. In a cluster using the OVN-Kubernetes plug-in, an Open vSwitch (OVS) runs on each node, and OVN configures the OVS on each node to implement the declared network configuration. OVN-Kubernetes is a set of daemons that convert the container cluster's virtual network configuration into OpenFlow rules. OpenFlow is a protocol for communicating with network switches and routers; it provides a method for remotely controlling traffic flows on network devices, enabling network administrators to configure, manage, and monitor network traffic flows.
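As an illustration of how a secondary network can be declared on top of OVN-Kubernetes, the sketch below follows the upstream ovn-kubernetes multi-homing NetworkAttachmentDefinition format; the name, topology, and subnet are illustrative assumptions, and the patent's DPU-mode setup may declare its network differently.

```yaml
# Hypothetical secondary OVN network; fields follow the upstream
# ovn-kubernetes multi-homing NAD format, values are illustrative.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ovn-network
  namespace: kube-system
spec:
  config: '{
    "cniVersion": "0.4.0",
    "name": "ovn-network",
    "type": "ovn-k8s-cni-overlay",
    "topology": "layer2",
    "subnets": "10.200.0.0/16",
    "netAttachDefName": "kube-system/ovn-network"
  }'
```

The name kube-system/ovn-network matches the attachment referenced in the Pod annotation example later in the description.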
In some embodiments of the invention, the method further comprises pre-deploying ovn-kubernetes CNI components on the control node of the cloud server cluster, comprising ovs, ovnkube-master, ovnkube-db, and ovnkube-node (in full mode), and ovn-kubernetes CNI components on the Host side of the worker nodes of the cloud server cluster, comprising OVN-K8s-CNI-Overlay and ovnkube-node. With the plug-ins pre-arranged as above, the service data traffic of the second CNI network can be offloaded to the DPU device and forwarded there.
Specifically, multus-CNI is a plug-in that enables kubernetes container clusters to support multiple CNIs simultaneously. With the development, kubernetes lacks support for the required functionality of multiple network interfaces in virtualized networks. Traditionally, networks may use multiple network interfaces to separate the network planes that control, manage and control the user/data. They are also used to support different protocols, meeting different regulatory and configuration requirements. To address this need intel implements the CNI plug-in of MULTUS, in which functionality is provided to add multiple interfaces to the Pod. This allows the POD to connect to multiple networks through different interfaces, and each interface will use its own CNI plug-in (the CNI plug-ins may be the same or different, depending entirely on the requirements and implementation). SRIOV network device plugin plug-in, SR-IOV network device plug-in expands the functions of Kubernetes, solving the high performance network I/O problem by discovering and publishing SR-IOV network physical function interfaces (PF, physical functions), virtual function interfaces (VF, virtual functions) and auxiliary network devices, particularly sub-function interfaces (SF) in the Kubernetes host.
In a specific embodiment of the present invention, the first type network card is a virtual network card interface, and the second type network card is a virtual function interface supporting single root I/O virtualization. The invention is not limited thereto, and the specific types of network card interfaces above are merely examples.
In yet another embodiment of the present invention, the method further comprises pre-deploying a traffic forwarding plane for forwarding service data traffic in the system on chip of the DPU device. Typically this is achieved by pre-deploying OVS as the forwarding plane of ovn-kubernetes CNI on the system on chip (SoC) of the DPU device. The invention is not limited thereto; the OVS plug-in above is only an example.
In some embodiments of the present invention, the method further comprises pre-deploying, in the system on chip of the DPU device and using an application container engine, a second CNI network control component for supporting flow table issuing and a second CNI network node configuration component for setting interface configuration; the server and the DPU device are connected to the multi-CNI container network through a switch and/or an optical switch. In a specific implementation, the ovn-kubernetes CNI supporting the second CNI network is used to deploy the OVN controller (ovn-controller) on the system on chip of the DPU device, together with a ovnkube-node component that issues the OpenFlow flow table and the OVS interface configuration for the OVS forwarding plane; the OVS interface configuration refers to configuring the PF and VFs.
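For concreteness, the fragment below sketches what such a DPU-SoC-side deployment with an application container engine could look like as a Compose file. It is a minimal sketch under stated assumptions: the image names, mount paths, and the node-mode environment variable are hypothetical placeholders, not values from the patent.

```yaml
# Hypothetical docker-compose sketch for the DPU SoC side; image names,
# mount paths, and the OVNKUBE_NODE_MODE variable are assumptions.
services:
  ovn-controller:
    image: example.org/ovn-kubernetes/ovn-controller:latest  # assumed image
    network_mode: host
    privileged: true
    volumes:
      - /var/run/openvswitch:/var/run/openvswitch   # local OVS db socket
      - /var/run/ovn:/var/run/ovn
  ovnkube-node:
    image: example.org/ovn-kubernetes/ovnkube-node:latest    # assumed image
    network_mode: host
    privileged: true
    environment:
      - OVNKUBE_NODE_MODE=dpu   # assumed knob mirroring the full/dpu-host split
    volumes:
      - /var/run/openvswitch:/var/run/openvswitch
```

The split mirrors the deployment described above: ovn-controller programs the local OVS with OpenFlow rules, while the ovnkube-node component on the SoC handles flow table issuing and interface configuration for the forwarding plane.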
In some embodiments of the present invention, when a plurality of DPU devices are connected to the second CNI network, data connection is established between different DPU devices through a switch or an optical switch, and data connection is established between different container groups in the same DPU device. A switch is a network device that connects different devices in a network and forwards packets to the correct destination device based on the destination address in the packet. The switch may transmit data using an electrical signal or an optical signal, wherein the switch that transmits data using an optical signal is an optical switch that employs an optical fiber as a transmission medium.
Specifically, taking K8s as the cloud-native container virtualization network orchestration tool and Calico and ovn-kubernetes as the CNI plug-ins of the container multi-CNI setup, the following describes how to offload the CNI that consumes the most CPU among the multiple CNIs to the DPU.
In current practice, both the forwarding plane of Calico and the forwarding plane of ovn-kubernetes are deployed on the server nodes. In this embodiment, we offload the forwarding plane of the second CNI, ovn-kubernetes, which carries the heavy data traffic, to the DPU SoC, letting the dedicated DPU data processor handle the data traffic and thereby freeing the Host-side server CPU from a large processing load. Note that the forwarding plane, the service plane, and the control plane are the three main functional components of a router architecture: the forwarding plane decides how to process the data packets flowing into an interface, for example selecting a suitable route from the routing table according to the destination IP address and other attributes of the packet; the service plane is responsible for computing and processing user service data and is mainly handled by the router CPU; and the control plane interconnects the network topology and mainly controls, via the various routing protocols, which routes enter the corresponding protocol routing tables.
In order to implement the traffic offloading scheme proposed by the present invention, the required preparation steps are as follows:
(1) Install and deploy Kubernetes on the Master control node and the cloud server nodes of the cloud server cluster to form the container virtualization cluster.
(2) Deploy Calico CNI based on Kubernetes as the K8s default CNI.
(3) Deploy Multus-CNI so that the K8s cluster can support multiple CNIs.
(4) Deploy ovn-kubernetes CNI based on Kubernetes as the second K8s CNI, where the ovn-kubernetes CNI components to be deployed on the master node comprise ovs, ovnkube-master, ovnkube-db, and ovnkube-node (full mode), and the ovn-kubernetes CNI components to be deployed on the Host side of worker nodes comprise OVN-K8s-CNI-Overlay and ovnkube-node (dpu-host mode).
(5) Deploy the SR-IOV Network Device Plugin to give ovn-kubernetes the ability to discover PF/VF interface resources on server nodes and to provide SR-IOV VF interfaces for Pods.
(6) Deploy OVS on the DPU SoC (system on chip) of the worker nodes as the forwarding plane of ovn-kubernetes CNI.
(7) Use docker on the DPU SoC to deploy the OVN controller (ovn-controller) and a ovnkube-node component, which issue the OpenFlow flow tables and OVS interface configuration for the OVS forwarding plane.
(8) Deploy the service Pod in the kubernetes cluster: in the annotations field of the yaml configuration file defining the Pod, set v1.multus-cni.io/default-network: kube-system/calico-net to designate the default network as Calico CNI, and set k8s.v1.cni.cncf.io/networks: kube-system/ovn-network@eth1 to indicate the use of a dual-CNI network whose second CNI is the ovn-kubernetes network (see the Pod manifest sketch after this list).
(9) After the service container group (Pod) is created, each container group has two network cards, corresponding respectively to the Calico CNI network (i.e., the first CNI network) and the ovn-kubernetes CNI network (i.e., the second CNI network). The Calico CNI network card is a veth interface: its traffic enters the Host kernel protocol stack after leaving the Pod and is forwarded according to Calico's iptables policies and routing policies. The ovn-kubernetes network card is an SR-IOV VF interface: its traffic goes to the DPU SoC side after leaving the Pod, enters the OVS forwarding plane via the VF representor, and is forwarded according to the flow table issued by OVN to OVS, occupying no CPU resources on the Host side (i.e., the server side).
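Putting steps (8) and (9) together, a service Pod manifest could look roughly like the sketch below. The annotation keys are the Multus conventions cited in step (8); the network-attachment names and the SR-IOV resource name (including the intel.com/ prefix, the device plugin's default) are illustrative assumptions.

```yaml
# Sketch of a dual-CNI service Pod per steps (8)-(9); attachment names
# and the SR-IOV resource name are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: service-pod
  annotations:
    v1.multus-cni.io/default-network: kube-system/calico-net   # first CNI (veth)
    k8s.v1.cni.cncf.io/networks: kube-system/ovn-network@eth1  # second CNI as eth1
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    resources:
      requests:
        intel.com/dpu_vf_pool: "1"   # SR-IOV VF advertised by the device plugin
      limits:
        intel.com/dpu_vf_pool: "1"
```

With such a manifest, eth0 lands on the Calico management network while eth1 is a VF whose traffic is forwarded on the DPU-side OVS, matching the traffic split described above.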
As shown in fig. 4, fig. 5, and fig. 6, in the K8s container cluster the POD has multiple CNI networks (multiple CNIs provide different networks for the same POD): Calico CNI serves as the first CNI and carries the management traffic network, while ovn-kubernetes CNI serves as the second CNI and carries the data traffic network generated by the services. The DPU SoC (System on Chip) runs an operating system on the DPU network card. Figs. 5 and 6 are enlarged partial detail views of fig. 4.
Correspondingly, the invention also provides a system for processing traffic in a multi-CNI container network. The system comprises a computer device including a processor and a memory, the memory storing computer instructions and the processor being configured to execute the computer instructions stored in the memory; when the computer instructions are executed by the processor, the system implements the steps of the method described above.
Embodiments of the present invention also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program performs the steps of the method described above. The computer-readable storage medium may be a tangible storage medium such as random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a floppy disk, a hard disk, a removable storage disk, a CD-ROM, or any other form of storage medium known in the art.
According to the method and system for processing service traffic in a multi-CNI container network, the DPU device is connected to the second CNI network and forwards the service data traffic on its pre-deployed traffic forwarding plane based on the received flow table for guiding traffic forwarding. Service data traffic forwarding, originally handled on the Host side of the multi-CNI container network (i.e., the cloud server nodes), is thus moved to a DPU device dedicated to data processing, which handles network packets efficiently. This effectively relieves the server CPU, solves the problem of heavy server CPU consumption when processing multi-CNI container network traffic, and leaves only the service management traffic for the server CPU, which can then focus on real business logic. Meanwhile, the invention preserves the multi-CNI container network's isolation between the management network carrying service management traffic and the data network carrying service data traffic, and markedly improves the CPU utilization efficiency of the cloud server nodes. In addition, the invention enumerates an implementation of the traffic offloading path and the plug-ins that need to be arranged, ensuring that the steps of the traffic processing method provided by the invention are realized smoothly.
Those of ordinary skill in the art will appreciate that the various illustrative components, systems, and methods described in connection with the embodiments disclosed herein can be implemented as hardware, software, or a combination of both. Whether a particular implementation is hardware or software depends on the specific application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. When implemented in hardware, it may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. The method processes of the present invention are not limited to the specific steps described and shown, but various changes, modifications and additions, or the order between steps may be made by those skilled in the art after appreciating the spirit of the present invention.
In this disclosure, features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, and various modifications and variations can be made to the embodiments of the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A method for processing traffic in a multi-CNI container network, wherein the multi-CNI container network includes a first CNI network for carrying service management traffic and a second CNI network for carrying service data traffic generated by the container groups when processing services, and each container group in the multi-CNI container network is provided with a first-type network card corresponding to the first CNI network and a second-type network card corresponding to the second CNI network, the method comprising the steps of:
The first CNI network transmits the service management traffic generated by each container group, through the first-type network card, to the kernel protocol stack on the Host side of the multi-CNI container network, and the kernel protocol stack forwards the service management traffic according to a preset forwarding strategy;
the second CNI network offloads the service data traffic generated by each container group, through the second-type network card, to a DPU device connected to the second CNI network, and issues to the DPU device a flow table for guiding traffic forwarding;
the DPU device uses a pre-deployed traffic forwarding plane for forwarding service data traffic, and forwards the service data traffic offloaded from the second CNI network according to the flow table from the second CNI network.
2. The method of claim 1, wherein the multi-CNI container network is created by a network orchestration tool for cloud-native container virtualization, wherein the network orchestration tool builds a container virtualization cluster by installing and deploying the network orchestration tool on control nodes of a cloud server cluster and cloud server nodes, and wherein a multi-CNI support component is pre-deployed in the network orchestration tool such that the network orchestration tool supports a plurality of CNIs.
3. The method according to claim 2, wherein an interface detection plug-in is pre-deployed on the cloud-native container virtualization network orchestration tool, enabling the second CNI network to discover physical function interfaces and/or virtual function interfaces on server nodes, virtual function interfaces based on single-root I/O virtualization being provided through the interface detection plug-in.
4. The method of claim 2, wherein the network orchestration tool for cloud-native container virtualization is Kubernetes and the multi-CNI support component is Multus-CNI, the method further comprising:
Deploying Calico CNI and ovn-Kubernetes CNI plug-ins based on the installed and deployed Kubernetes, the Calico CNI being used to carry the first CNI network, the ovn-Kubernetes CNI plug-in being used to carry the second CNI network;
Wherein Calico CNI is set to the default CNI of Kubernetes.
5. The method according to claim 1, characterized in that the method further comprises:
The ovn-kubernetes CNI components pre-deployed on the control node of the cloud server cluster comprise ovs, ovnkube-master, ovnkube-db, and ovnkube-node (in full mode);
The ovn-kubernetes CNI components on the Host side of the worker nodes of the cloud server cluster comprise OVN-K8s-CNI-Overlay and ovnkube-node.
6. The method of claim 1, wherein the first type of network card is a virtual network card interface and the second type of network card is a virtual function interface supporting single root I/O virtualization.
7. The method of claim 1, wherein a traffic forwarding plane for forwarding traffic data traffic is pre-deployed in a system-on-a-chip of the DPU device.
8. The method of claim 7, wherein the method further comprises:
A second CNI network control component for supporting flow table issuing and a second CNI network node configuration component for setting interface configuration are deployed in a system on chip of the DPU device in advance by using an application container engine;
the Host side is a server, and the server and the DPU device are connected to the multi-CNI container network through a switch and/or an optical switch.
9. The method of claim 8, wherein when a plurality of DPU devices are connected to the second CNI network, a data connection is established between different DPU devices through a switch or an optical switch, and a data connection is established between different container groups within the same DPU device.
10. A system for DPU offloading of data-network-plane traffic in a multi-CNI container network, comprising a processor and a memory, wherein the memory stores computer instructions and the processor is configured to execute the computer instructions stored in the memory, the system implementing the steps of the method of any one of claims 1 to 9 when the computer instructions are executed by the processor.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311435191.XA (CN117459468B) | 2023-10-31 | 2023-10-31 | A method and system for processing business traffic in a multi-CNI container network |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311435191.XA (CN117459468B) | 2023-10-31 | 2023-10-31 | A method and system for processing business traffic in a multi-CNI container network |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN117459468A (en) | 2024-01-26 |
| CN117459468B (en) | 2024-12-13 |
Family
ID=89590650
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311435191.XA (CN117459468B, active) | A method and system for processing business traffic in a multi-CNI container network | 2023-10-31 | 2023-10-31 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117459468B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119071071B (en) * | 2024-08-29 | 2025-05-27 | 北京火山引擎科技有限公司 | Network access method, device, equipment and medium |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111147297B (en) * | 2019-12-23 | 2022-07-15 | 广东省新一代通信与网络创新研究院 | Multi-layer network plane construction method of kubernetes |
| CN113676471B (en) * | 2021-08-17 | 2023-04-07 | 上海道客网络科技有限公司 | Cross-node communication method, system, medium and electronic device based on container cloud platform |
| US20230239268A1 (en) * | 2022-01-21 | 2023-07-27 | Vmware, Inc. | Managing ip addresses for dpdk enabled network interfaces for cloud native pods |
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116886496A (en) * | 2023-08-11 | 2023-10-13 | 中科驭数(北京)科技有限公司 | DPU-based data processing method, device, equipment and readable storage medium |
| CN116800616A (en) * | 2023-08-25 | 2023-09-22 | 珠海星云智联科技有限公司 | Management method and related device of virtualized network equipment |
Also Published As
| Publication number | Publication date |
|---|---|
| CN117459468A (en) | 2024-01-26 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |