
CN120223694A - A data packet processing method and related equipment - Google Patents

A data packet processing method and related equipment

Info

Publication number
CN120223694A
CN120223694A (application number CN202410489651.5A)
Authority
CN
China
Prior art keywords
data packet
packet queue
virtual machine
tenant
cloud server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410489651.5A
Other languages
Chinese (zh)
Inventor
李春鹤
罗颖
郭洪志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Cloud Computing Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co Ltd filed Critical Huawei Cloud Computing Technologies Co Ltd
Publication of CN120223694A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533: Hypervisors; Virtual machine monitors
    • G06F9/45558: Hypervisor-specific management and integration aspects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/61: Scheduling or organising the servicing of application requests, taking into account QoS or priority requirements
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533: Hypervisors; Virtual machine monitors
    • G06F9/45558: Hypervisor-specific management and integration aspects
    • G06F2009/45595: Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present application provides a data packet processing method and related equipment for differentiating the ingress traffic of a physical network card and scheduling communication resources reasonably, so as to avoid wasting communication resources. The method is applied to a cloud server that communicates with a plurality of network devices through a physical network card. The method comprises: obtaining a plurality of data packet queues, where the data packets in each queue correspond to the same virtual machine and the queues originate from the plurality of network devices; determining a scheduling policy according to the identifier of each data packet queue, where the scheduling policy indicates the communication resources used to forward each queue to its corresponding virtual machine; and sending the data packets to the virtual machine corresponding to each queue based on the scheduling policy.

Description

Data packet processing method and related equipment
This application claims priority to Chinese patent application No. 202311839364.4, entitled "VM isolation and scheduling method for Host physical network card ingress", filed with the China National Intellectual Property Administration on December 27, 2023, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of cloud computing, and in particular, to a method and related device for processing a data packet.
Background
A "noisy neighbor" problem may arise in both public and private cloud environments. The "noisy neighbor" problem occurs when multiple tenants use the same shared communication resource and one tenant occupies most or even all of it, leaving the other tenants with limited communication resources.
In a related technical solution, credit is allocated to the network card of each virtual machine over time, and the virtual machine consumes credit when sending or receiving data packets. Soft-switch communication resources are allocated to the virtual network cards based on the credit of each card: a virtual network card with more remaining credit is allocated more soft-switch communication resources than one with less remaining credit.
In this method, the basis for allocating communication resources to a virtual network card is its remaining credit, but the remaining credit only reflects the card's historical packet transmission. A large remaining credit does not mean that more data will be sent to that virtual network card over the next period of time than to a card with less remaining credit. The allocated communication resources may therefore not match the virtual network card's actual transmission volume, wasting communication resources.
Disclosure of Invention
The present application provides a data packet processing method and related equipment for differentiating the ingress traffic of a physical network card. This provides a basis for determining the communication resources used when forwarding data packets and allows communication resources to be scheduled reasonably, avoiding waste. It also prevents a single virtual machine or a single tenant from occupying most of the forwarding communication resources, effectively reducing traffic interference between different receivers (virtual machines or tenants) and achieving traffic isolation.
In a first aspect, the present application provides a method for processing a data packet, where the method is applied to a cloud server, and the method includes:
The data packet processing method is applied to a "many-to-one" scenario, in which one cloud server communicates with a plurality of network devices and exchanges data packets with them through a physical network card. The cloud server obtains a plurality of data packet queues, where the data packets in each queue correspond to the same virtual machine. "Correspond" here means that the destination address of every data packet in a queue is the same virtual machine, or equivalently, that the receiver of every data packet in a queue is the same virtual machine. This can also be understood as the cloud server acquiring multiple data streams, each corresponding to one virtual machine; that is, the cloud server acquires data packets that have already been classified by receiving virtual machine. The cloud server determines a scheduling policy for the data packet queues according to the identifier of each queue, where the scheduling policy indicates communication resource allocation information for forwarding the queues, including the communication resources used to forward each data packet to its corresponding virtual machine. The cloud server then sends the data packets to the virtual machine corresponding to each queue based on the scheduling policy.
In the present application, the data packet queues obtained by the cloud server are classified by receiving virtual machine, so the receiver of each data packet is explicit; that is, data packets sent to different virtual machines are distinguished. Because the cloud server obtains the data packets through the physical network card, the cloud server can differentiate the ingress traffic of the physical network card, which provides a basis for determining the communication resources used when forwarding the data packets. At the same time, because the receivers of the queues are distinguished, communication resources can be scheduled reasonably based on the specific situation of the virtual machine corresponding to each queue, avoiding waste. In addition, a single virtual machine or a single tenant is prevented from occupying most of the forwarding communication resources, effectively reducing traffic interference between different receivers (virtual machines or tenants) and achieving traffic isolation.
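The three steps of the method can be pictured with a short sketch. The following Python fragment is purely illustrative: every hook in it (the queue map, the queue-to-virtual-machine map, the policy map, and the forwarding primitive) is an assumption for the example, not an interface defined by the present application.

```python
def process_packets(queues: dict, queue_vm: dict, policy: dict, forward) -> None:
    """Sketch of the first aspect: obtain queues, apply the policy, forward.

    queues:   queue ID -> list of packets, already classified per virtual machine
    queue_vm: queue ID -> receiving virtual machine
    policy:   queue ID -> communication resources assigned by the scheduling policy
    forward:  assumed primitive that sends one packet using those resources
    """
    for qid, packets in queues.items():
        for pkt in packets:
            forward(queue_vm[qid], pkt, policy[qid])
```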
In some optional implementations of the first aspect, the identifier of each data packet queue includes a tenant identifier that uniquely indicates the tenant receiving the queue. Before determining the scheduling policy, the cloud server also obtains the tenant priority corresponding to each queue. The tenant priority may be set by the tenant, determined based on the cloud service the tenant has purchased or uses, or defined in other ways; this is not limited here. The cloud server determines the scheduling policy according to the tenant identifier and the tenant priority of each queue. Under the scheduling policy, the communication resources used to forward a queue with a high tenant priority to its corresponding first virtual machine are better than the communication resources used to forward a queue with a low tenant priority to its corresponding second virtual machine.
In the present application, the cloud server can obtain tenant priorities, and when determining the scheduling policy based on the tenant priority and tenant identifier of each queue, queues with higher tenant priorities use better communication resources during forwarding than queues with lower tenant priorities. This matches actual application scenarios and service requirements, making the scheduling of communication resources more reasonable.
In some optional implementations of the first aspect, the communication resources used to forward each data packet queue to its corresponding virtual machine include at least one of the transmission rate, the bandwidth resources used, the corresponding throughput, or the latency of forwarding the queue.
In the present application, there are multiple possibilities for the communication resources that the scheduling policy indicates for forwarding a queue to a virtual machine, and the scheduling policy can be configured along several dimensions, enriching the application scenarios and implementations of the technical solution.
In some optional implementations of the first aspect, the cloud server may classify the data packets itself to obtain the plurality of data packet queues. Specifically, the cloud server receives data packets to be classified from the physical network card and determines the virtual machine corresponding to each packet according to the packet's destination address and a mapping between destination addresses and virtual machine addresses. The packets are then classified by corresponding virtual machine to obtain the plurality of queues.
In some optional implementations of the first aspect, the cloud server may obtain the plurality of data packet queues by receiving them from the physical network card. That is, the physical network card classifies the received data packets into a plurality of queues and then sends the queues to the cloud server.
In the present application, the cloud server can obtain the data packet queues in multiple ways, enriching the application scenarios and implementations of the technical solution. In the scheme where the cloud server classifies the received data packets itself, the cloud server's functionality is enriched, the performance requirements on the physical network card are low, and the scheme couples well with conventional physical network cards. In the scheme where the physical network card sends the queues to the cloud server, no central processing unit (CPU) resources of the cloud server are consumed, saving the cloud server's resource overhead.
In a second aspect, the present application provides a cloud server that communicates with a plurality of network devices through a physical network card, including:
a transceiver unit, configured to obtain a plurality of data packet queues, where the data packets in each queue correspond to the same virtual machine and the queues originate from the plurality of network devices; and
a processing unit, configured to determine a scheduling policy according to the identifier of each data packet queue, where the scheduling policy indicates the communication resources used to forward each queue to its corresponding virtual machine;
where the transceiver unit is further configured to send the data packets to the virtual machine corresponding to each queue based on the scheduling policy.
The cloud server is configured to perform the method of the first aspect or any one of the possible implementation manners of the first aspect.
In a third aspect, the present application provides a computing device comprising a processor and a memory. The processor of the computing device is configured to execute instructions stored in the memory, such that the computing device implements the method shown in the foregoing first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, the present application provides a cluster of computing devices, comprising at least one computing device, each computing device comprising a processor and a memory, the processor of the at least one computing device being configured to execute instructions stored in the memory of the at least one computing device, such that the cluster of computing devices implements the method disclosed in the first aspect and any one of the possible implementations of the first aspect.
In a fifth aspect, the application provides a computer program product comprising instructions which, when executed on a processor, implement the method of the first aspect or any one of the possible implementations of the first aspect, or which, when executed by a cluster of computer devices, cause the cluster of computer devices to implement the method of the first aspect and any one of the possible implementations of the first aspect.
In a sixth aspect, the present application provides a computer readable storage medium having stored thereon computer program instructions for implementing the method of the first aspect or any one of the possible implementations of the first aspect, when the computer program instructions are run on a processor, or for causing a cluster of computer devices to implement the method disclosed in the first aspect and any one of the possible implementations of the first aspect, when the computer program instructions are run by the cluster of computer devices.
Advantageous effects shown in any of the second to sixth aspects are similar to those of the first aspect or any possible implementation manner of the first aspect, and are not described here again.
Drawings
FIG. 1 is a schematic diagram of a system architecture according to an embodiment of the present application;
FIG. 2 is a schematic diagram of another system architecture according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a data packet processing method according to an embodiment of the present application;
FIG. 4 is another schematic flow chart of a data packet processing method according to an embodiment of the present application;
FIG. 5 is another schematic flow chart of a data packet processing method according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a cloud server according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a computing device according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating a configuration of a computing device cluster according to an embodiment of the present application;
FIG. 9 is another schematic structural diagram of a computing device cluster according to an embodiment of the present application.
Detailed Description
The embodiments of the present application provide a data packet processing method for differentiating the ingress traffic of a physical network card. The method provides a basis for determining the communication resources used when forwarding data packets, allows communication resources to be scheduled reasonably, and avoids wasting them. It also prevents a single virtual machine or a single tenant from occupying most of the forwarding communication resources, effectively reducing traffic interference between different receivers (virtual machines or tenants) and achieving traffic isolation.
Embodiments of the present application are described below with reference to the accompanying drawings. As those of ordinary skill in the art will appreciate, with the development of technology and the emergence of new scenarios, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
The terms "first", "second", and the like in the description, the claims, and the drawings are used to distinguish between similar elements and do not necessarily describe a particular order or sequence. It should be understood that terms used in this way are interchangeable where appropriate and merely distinguish objects with the same attributes in the description of the embodiments. Furthermore, the terms "comprise", "include", and "have", and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, product, or device that comprises a list of elements is not necessarily limited to those elements and may include other elements not expressly listed or inherent to it. In addition, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between objects and covers three cases: for example, "A and/or B" may mean A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of" and similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, "at least one of a, b, or c" may represent a, b, c, a and b, a and c, b and c, or a, b, and c, where a, b, and c may each be singular or plural.
Referring to Fig. 1 and Fig. 2, which are schematic diagrams of system architectures according to an embodiment of the present application.
The data packet processing method provided by the embodiments of the present application is applied to cloud computing. In the current cloud computing field, a cloud platform can provide diversified services to users. As shown in Fig. 1, a tenant logs in to the cloud platform 30 via the Internet 20 through the client 10, using the account and password registered with the cloud platform 30, and uses the cloud services the tenant has purchased.
The cloud platform 30 is used to manage an infrastructure that includes a plurality of data centers deployed in different regions. For example, region 1 shown in Fig. 1 includes cloud data center 1 and cloud data center 2, and region 2 includes cloud data center 3 and cloud data center 4. Each cloud data center is provided with a plurality of servers, on which service instances (including at least one of virtual machines, containers, and dedicated hosts) run.
The embodiments of the present application do not limit the type of cloud service provided by the cloud platform 30 or used by the tenant; that is, the present application is not limited to any business scenario, as long as the cloud server is in a "many-to-one" scenario. A "many-to-one" scenario is one in which a cloud server communicates with a plurality of network devices and receives the data packets those devices send.
The "many-to-one" scenario is described with reference to Fig. 2. As shown in Fig. 2, the cloud server establishes communication connections with a plurality of network devices through a physical network card. Data packets sent by the network devices to the cloud server are transmitted through the physical network card to a virtual switch running on the cloud server.
In related technical solutions, the virtual switch does not identify or distinguish the virtual machines corresponding to the data packets received through the physical network card. When the volume of data obtained through the physical network card reaches the upper limit of the virtual switch's processing capacity, the data packets sent to some virtual machines fill up the virtual switch's communication resources and affect the data exchange of the other virtual machines. The data packet processing method provided by the embodiments of the present application solves this problem.
It should be noted that, in the embodiment shown in Fig. 2, there are many possibilities for the network devices communicating with the cloud server, including other hosts in the cloud, virtual machines on other hosts managed by tenant A, devices deployed in a data center, and the like; this is not limited here. In addition, in the embodiment shown in Fig. 2, m and n are integers greater than 2, and the specific numbers are determined by the needs of the practical application, which is not limited here.
It can be understood that, in the embodiment shown in Fig. 2, both virtual machine 1 and virtual machine 2 are managed by tenant A and provide cloud services for tenant A. The embodiments of the present application do not limit the number of virtual machines managed by one tenant.
Referring to Fig. 3, Fig. 3 is a schematic flow chart of a data packet processing method according to an embodiment of the present application, including:
301. The cloud server obtains a plurality of data packet queues, where the data packets in each queue correspond to the same virtual machine and the queues originate from a plurality of network devices.
The embodiment of the present application applies to a "many-to-one" communication scenario: the cloud server establishes communication connections with a plurality of network devices and obtains the data packets they send through a physical network card. That is, on the receiving side of the physical network card, the cloud server may receive data packets sent by the plurality of network devices.
Further, in the embodiment of the present application, the cloud server obtains a plurality of data packet queues, where each queue corresponds to one virtual machine; in other words, the data packets in one queue correspond to the same virtual machine. "Correspond" here means that the receiver of every data packet in a queue is the same virtual machine. That is, the queues obtained by the cloud server are classified based on the virtual machine that receives the packets.
In the embodiment of the present application, there are multiple possible ways for the cloud server to obtain the plurality of data packet queues, described separately below:
In some optional embodiments, the cloud server has a flow classification function and can classify the obtained data packets itself. That is, the cloud server obtains the plurality of data packet queues by receiving the data packets to be classified from the physical network card, determining the virtual machine corresponding to each packet based on the packet's destination address and the mapping between destination addresses and virtual machine addresses, and classifying the packets by corresponding virtual machine to obtain the plurality of queues.
The data packets to be classified that the physical network card sends to the cloud server originate from some or all of the network devices communicating with the cloud server. They are the data packets received by the cloud server within a certain period of time; optionally, their total data volume is less than or equal to the processing upper limit of the virtual switch running on the cloud server. In addition, it should be noted that one network device may send multiple data packets to the cloud server through the physical network card, and those packets may correspond to the same or different virtual machines.
In the scheme where the cloud server classifies the obtained data packets itself, the cloud server determines the virtual machine corresponding to each packet based on the mapping between the packet's destination address and a virtual machine address. This mapping may be stored on the cloud server or on another device communicating with the cloud server; this is not limited here. In the former scheme, after obtaining a packet to be classified, the cloud server can determine the corresponding virtual machine by looking up its locally stored mapping using the packet's destination address. In the latter scheme, after obtaining a packet to be classified, the cloud server can obtain the virtual machine address corresponding to the packet's destination address from the other device. For example, the cloud server sends the destination address of packet 1 to the other device and receives from it the mapping between that destination address and a virtual machine address, thereby determining the virtual machine address corresponding to packet 1.
It can be understood that the mapping between destination addresses and virtual machine addresses may be stored as key-value pairs, as a table, or in other forms that can indicate the mapping, such as a hash table; this is not limited here.
For example, assume that the destination IP address of a data packet to be classified obtained by the cloud server is 192.168.1.100, and the virtual machine IP address corresponding to that destination IP address in the mapping is the IP address of virtual machine 1. The cloud server then determines that the virtual machine corresponding to the packet is virtual machine 1 and enqueues the packet into the data packet queue corresponding to virtual machine 1.
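For illustration only, the classification step above can be sketched in a few lines of Python. The mapping table, the packet representation, and the queue structure below are assumptions made for the example, not the actual implementation:

```python
from collections import defaultdict

# Assumed mapping from destination IP address to virtual machine; in
# practice it may be stored locally or queried from another device.
DEST_TO_VM = {"192.168.1.100": "VM1", "192.168.1.101": "VM2"}

# One data packet queue per virtual machine.
packet_queues = defaultdict(list)

def classify(packet: dict) -> None:
    """Enqueue a packet into the queue of the virtual machine it targets."""
    vm = DEST_TO_VM.get(packet["dst_ip"])
    if vm is not None:
        packet_queues[vm].append(packet)

classify({"dst_ip": "192.168.1.100", "payload": b"..."})
# packet_queues["VM1"] now holds the packet destined for virtual machine 1.
```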
In some alternative embodiments, the cloud server may obtain the plurality of data packet queues directly; that is, the cloud server receives the plurality of queues sent by the physical network card. In this technical solution, the physical network card has the flow classification function: after obtaining the data packets to be classified, it determines the virtual machine corresponding to each packet based on the packet's destination address and the mapping between destination addresses and virtual machine addresses, and classifies the packets by corresponding virtual machine to obtain the plurality of queues. The specific process by which the physical network card obtains the queues is similar to the process by which the cloud server classifies the packets itself and is not repeated here.
It can be understood that the physical network card can classify the data packets regardless of whether the cloud server has a flow classification function, reducing the cloud server's resource consumption.
In the present application, the cloud server can obtain the data packet queues in multiple ways, enriching the application scenarios and implementations of the technical solution. In the scheme where the cloud server classifies the received data packets itself, the cloud server's functionality is enriched, the performance requirements on the physical network card are low, and the scheme couples well with conventional physical network cards. In the scheme where the physical network card sends the queues to the cloud server, no CPU resources of the cloud server are consumed, saving the cloud server's resource overhead.
302. The cloud server determines a scheduling policy according to the identifier of each data packet queue, where the scheduling policy indicates the communication resources used to forward each queue to its corresponding virtual machine.
As described above, each data packet queue corresponds to one virtual machine. Because a virtual machine is managed by only one tenant at a time, the data packets in the same queue correspond to the same tenant. In the embodiment of the present application, the identifier of each queue includes a tenant identifier that uniquely indicates the tenant managing the virtual machine that receives the queue. The tenant identifier may be any identifier that uniquely indicates the tenant, such as a serial number or a virtual extensible local area network (VXLAN) network identifier (VNI).
The cloud server may further obtain the tenant priority corresponding to each queue. The tenant priority may be set by the tenant, determined based on the cloud service the tenant has purchased or uses, or defined in other ways; this is not limited here. Optionally, the cloud server may send a query instruction containing a tenant identifier to a controller to request the tenant priority corresponding to that identifier, and the controller sends the cloud server a response containing the tenant priority.
The cloud server determines the scheduling policy for the queues according to the tenant identifier and the tenant priority of each queue. The scheduling policy indicates that the communication resources used to forward a queue with a high tenant priority to its corresponding first virtual machine are better than the communication resources used to forward a queue with a low tenant priority to its corresponding second virtual machine. That is, queues with higher tenant priorities use better communication resources during forwarding.
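As a minimal sketch of this step, the following Python fragment derives bandwidth shares from tenant priorities. The controller query, the numeric priorities, and the proportional-share rule are all assumptions made for illustration; the present application only requires that higher-priority queues receive better communication resources:

```python
def query_tenant_priority(tenant_id: str) -> int:
    """Assumed stand-in for the controller query described above."""
    return {"tenantA": 2, "tenantB": 1}.get(tenant_id, 1)

def build_scheduling_policy(queue_tenants: dict) -> dict:
    """Give each queue a bandwidth share proportional to its tenant priority."""
    priorities = {q: query_tenant_priority(t) for q, t in queue_tenants.items()}
    total = sum(priorities.values())
    return {q: p / total for q, p in priorities.items()}

policy = build_scheduling_policy({"queue1": "tenantA", "queue2": "tenantB"})
# policy == {"queue1": 2/3, "queue2": 1/3}: queue1 is forwarded with
# better (larger) bandwidth resources than queue2.
```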
In the present application, the cloud server can obtain tenant priorities, and when determining the scheduling policy based on the tenant priority and tenant identifier of each queue, queues with higher tenant priorities use better communication resources during forwarding than queues with lower tenant priorities. This matches actual application scenarios and service requirements, making the scheduling of communication resources more reasonable.
In the embodiments of the present application, there are multiple possibilities for the communication resources used when forwarding the data packet queues, including at least one of transmission rate, bandwidth, throughput, or latency. The statement that a high-priority queue uses better communication resources than a low-priority queue is understood differently for different resource types, as the following examples show:
For example, assume that the priority of tenant A, corresponding to data packet queue 1, is higher than the priority of tenant B, corresponding to data packet queue 2, and that queue 1 carries more data than queue 2. The scheduling policy may then indicate that the bandwidth resources allocated to the virtual machine managed by tenant A are greater than those allocated to the virtual machine managed by tenant B. Further, the ratio of the bandwidth weights of the two virtual machines may equal the ratio of the data volume of queue 1 to that of queue 2.
For example, assume that the priority of tenant C, corresponding to data packet queue 3, is higher than the priority of tenant D, corresponding to data packet queue 4, and that queue 3 carries less data than queue 4. The scheduling policy may then indicate that queue 3 is forwarded to the virtual machine managed by tenant C at a faster transmission rate than queue 4 is forwarded to the virtual machine managed by tenant D. Optionally, when queue 3 and queue 4 are forwarded at the same time, the rate limit on queue 4 can be enforced by means such as a token bucket algorithm or random packet dropping, as the sketch below illustrates.
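A token bucket is one standard way to enforce such a rate limit. The following is a generic token bucket sketch with illustrative rate and burst parameters; it is not taken from the patented implementation itself:

```python
import time

class TokenBucket:
    """Minimal token bucket used to cap the forwarding rate of a queue."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_len: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, up to the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True
        return False  # over the limit: delay the packet or drop it

# Illustrative: cap queue 4 at roughly 10 MB/s with a 64 KB burst.
bucket = TokenBucket(rate_bytes_per_s=10e6, burst_bytes=64 * 1024)
can_forward = bucket.allow(1500)
```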
For example, assume that the priority of tenant E, corresponding to data packet queue 5, is higher than the priority of tenant F, corresponding to data packet queue 6. The scheduling policy may then indicate that queue 5 is forwarded to tenant E at a greater throughput than queue 6 is forwarded to tenant F. The relative data volumes of queue 5 and queue 6 are not limited here.
In the present application, there are multiple possibilities for the communication resources that the scheduling policy indicates for forwarding a queue to a virtual machine, and the scheduling policy can be configured along several dimensions, enriching the application scenarios and implementations of the technical solution.
303. The cloud server sends the data packets to the virtual machine corresponding to each data packet queue based on the scheduling policy.
After determining the scheduling policy, the cloud server can invoke the virtual switch, and the virtual switch sends the data packets to the virtual machine corresponding to each queue based on the scheduling policy.
Based on the foregoing, in the present application the data packet queues obtained by the cloud server are classified by receiving virtual machine, so the receiver of each data packet is explicit; that is, data packets sent to different virtual machines are distinguished. Because the cloud server obtains the data packets through the physical network card, the cloud server can differentiate the ingress traffic of the physical network card, achieving traffic isolation and providing a basis for determining the communication resources used when forwarding the data packets. At the same time, because the receivers of the queues are distinguished, communication resources can be scheduled reasonably based on the specific situation of the virtual machine corresponding to each queue, avoiding waste. In addition, a single virtual machine or a single tenant is prevented from occupying most of the forwarding communication resources, effectively reducing traffic interference between different receivers (virtual machines or tenants).
In the foregoing examples, the scheduling policy indicates the communication resources of virtual machines managed by tenants of different priorities. In some optional embodiments, the scheduling policy may also indicate the communication resources used by virtual machines managed by tenants of the same priority.
Optionally, the scheduling policy assigns the same communication resources to forwarding data packet queues whose tenants have the same priority. Queues with the same tenant priority may correspond to one or more tenants.
For example, the cloud server receives data packet queue 1, queue 2, and queue 3 sent by the physical network card in the same period. Queue 1 corresponds to virtual machine 1 managed by tenant A, queue 2 corresponds to virtual machine 2 managed by tenant A, and queue 3 corresponds to virtual machine 3 managed by tenant B, where tenant A and tenant B have the same priority. That is, queue 1 and queue 2 correspond to the same tenant, and the tenant corresponding to queue 3 has the same tenant priority. In the scheduling policy determined by the cloud server, the communication resources for forwarding queue 1 to virtual machine 1, for forwarding queue 2 to virtual machine 2, and for forwarding queue 3 to virtual machine 3 are then indicated to be the same.
Optionally, the scheduling policy may instead assign different communication resources to forwarding data packet queues whose tenants have the same priority. For example, the scheduling policy may be determined in combination with the data volume of each queue.
For example, the cloud server receives data packet queue 1, queue 2, and queue 3 sent by the physical network card in the same period. Queue 1 corresponds to virtual machine 1 managed by tenant A, queue 2 corresponds to virtual machine 2 managed by tenant A, and queue 3 corresponds to virtual machine 3 managed by tenant B, where tenant A and tenant B have the same priority. That is, queue 1 and queue 2 correspond to the same tenant, and the tenant corresponding to queue 3 has the same tenant priority. In addition, queue 1 carries more data than queue 2 and queue 3.
In the scheduling policy determined by the cloud server, the communication resources for forwarding queue 1 to virtual machine 1 are then indicated to be better than those for forwarding queue 2 to virtual machine 2, which in turn are better than those for forwarding queue 3 to virtual machine 3. For example, the scheduling policy may indicate that the bandwidth resources allocated to virtual machine 1 are greater than those allocated to virtual machine 2, and that those allocated to virtual machine 2 are greater than those allocated to virtual machine 3. Optionally, the ratio of the virtual machines' bandwidth weights may equal the ratio of the queues' data volumes, as the sketch after this paragraph illustrates.
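A sketch of this volume-weighted split follows; the byte counts (and thus the 6:3:1 weighting) are assumptions made for the example:

```python
def volume_weighted_shares(queue_bytes: dict) -> dict:
    """Split bandwidth among same-priority queues in proportion to data volume."""
    total = sum(queue_bytes.values())
    return {q: n / total for q, n in queue_bytes.items()}

shares = volume_weighted_shares({"queue1": 6000, "queue2": 3000, "queue3": 1000})
# shares == {"queue1": 0.6, "queue2": 0.3, "queue3": 0.1}: virtual machine 1
# receives the largest bandwidth share because queue 1 carries the most data.
```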
As the examples in step 302 show, in the embodiment of the present application the communication resources used when forwarding queues can differ not only between virtual machines managed by tenants with different priorities but also between virtual machines managed by tenants with the same priority. Being able to set different communication resources for virtual machines managed by same-priority tenants brings the scheme closer to the needs of practical applications, improving the practicality of the technical solution.
In the foregoing, the description took the cloud server as the executing entity. The data packet processing procedure is described next from a system perspective. Referring to Fig. 4 and Fig. 5, Fig. 4 and Fig. 5 are schematic flow charts of a data packet processing method according to an embodiment of the present application.
It should be noted that, in the embodiments shown in Fig. 4 and Fig. 5, the cloud server manages two virtual machines, each corresponding to one data packet queue. In practical applications, the cloud server may manage a larger number of virtual machines, which is not limited here.
As shown in Fig. 4, a virtual switch runs on the cloud server and includes a flow classification module, a scheduling module, and a forwarding module. The physical network card receives the data packets sent by the plurality of network devices and sends them to the cloud server. The virtual switch running on the cloud server processes the data packets, implementing the data packet processing method provided by the embodiment of the present application.
The flow classification module divides the data packets into queues, where the packets in each queue correspond to the same virtual machine. For example, queue 1 shown in Fig. 4 corresponds to virtual machine (VM) 1, and queue 2 corresponds to VM 2. The specific function of the flow classification module is similar to the scheme in step 301 of the foregoing embodiment in which the cloud server classifies the obtained packets into queues itself; the details are given above and not repeated here.
The scheduling module determines the scheduling policy based on the identifiers of the queues. This process is similar to step 302 of the foregoing embodiment and is not repeated here.
The forwarding module forwards the data packets to the virtual machines based on the scheduling policy. In the embodiment shown in Fig. 4, queue 1 is forwarded to virtual machine 1 and queue 2 is forwarded to virtual machine 2 based on the scheduling policy.
It should be noted that, during forwarding, packets may accumulate in queues that are allocated inferior communication resources. Once the backlog reaches a certain level, the cloud server may discard subsequently received packets corresponding to the same virtual machine as that queue. In this scheme, optionally, if the sender of a packet does not receive a response for it within a preset time, the sender may retransmit the packet.
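The backlog-and-drop behavior can be sketched as follows; the backlog limit is an assumed illustrative value, and recovery relies on the sender's retransmission after its timeout:

```python
MAX_BACKLOG = 1024  # assumed per-queue packet limit for illustration

def enqueue_or_drop(queue: list, packet: dict) -> bool:
    """Accept a packet unless its queue has accumulated too large a backlog."""
    if len(queue) >= MAX_BACKLOG:
        return False  # dropped; the sender retransmits if no response arrives
    queue.append(packet)
    return True
```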
Unlike the embodiment shown in Fig. 4, in the embodiment shown in Fig. 5 the flow classification module resides in the physical network card rather than the cloud server. That is, in Fig. 5 the physical network card classifies the received data packets into the plurality of queues, and the cloud server obtains the queues directly from the physical network card without classifying the packets itself, saving the cloud server's CPU resources. The functions of the other modules are similar to the embodiment shown in Fig. 4 and are not repeated here.
Referring to Fig. 6, Fig. 6 is a schematic structural diagram of a cloud server according to an embodiment of the present application, where the cloud server communicates with a plurality of network devices through a physical network card. As shown in Fig. 6, the cloud server 600 includes a transceiver unit 601 and a processing unit 602.
In some alternative embodiments, the transceiver unit 601 is configured to obtain a plurality of data packet queues, where the data packets in each queue correspond to the same virtual machine and the queues originate from the plurality of network devices.
The processing unit 602 is configured to determine a scheduling policy according to the identifier of each queue, where the scheduling policy indicates the communication resources used to forward each queue to its corresponding virtual machine.
The transceiver unit 601 is further configured to send the data packets to the virtual machine corresponding to each queue based on the scheduling policy.
In some alternative embodiments, the identifier of each queue includes the tenant identifier of the queue, and the transceiver unit 601 is further configured to obtain the tenant priority corresponding to each queue.
The processing unit 602 is specifically configured to determine the scheduling policy according to the tenant identifier and tenant priority of each queue, where the communication resources used to forward a queue with a high tenant priority to its corresponding first virtual machine are better than those used to forward a queue with a low tenant priority to its corresponding second virtual machine.
In some alternative embodiments, the communication resources used to forward each queue to its corresponding virtual machine include at least one of the transmission rate, the bandwidth resources used, the corresponding throughput, or the latency of forwarding the queue.
In some alternative embodiments, the transceiver unit 601 is specifically configured to receive the data packets to be classified sent by the physical network card, determine the virtual machine corresponding to each packet according to the packet's destination address and the mapping between destination addresses and virtual machine addresses, and classify the packets by corresponding virtual machine to obtain the plurality of queues.
In some alternative embodiments, the transceiver unit 601 is specifically configured to receive the plurality of data packet queues sent by the physical network card.
The transceiver unit 601 and the processing unit 602 may be implemented in software or in hardware. The implementation of the processing unit 602 is described next as an example; the implementation of the transceiver unit 601 may refer to that of the processing unit 602.
As an example of a software functional unit, the processing unit 602 may include code running on a computing instance. The computing instance may be at least one of a physical host (computing device), a virtual machine, or a container, and there may be one or more such instances. For example, the processing unit 602 may include code running on multiple hosts/virtual machines/containers. It should be noted that the hosts/virtual machines/containers running the code may be distributed in the same region or in different regions, and in the same availability zone (AZ) or in different AZs, where each AZ comprises one data center or multiple geographically close data centers. A region typically comprises multiple AZs.
Likewise, the hosts/virtual machines/containers running the code may be distributed in the same virtual private cloud (VPC) or across multiple VPCs. Typically one VPC is deployed in one region, and a communication gateway is deployed in each VPC for interconnection between VPCs in the same region and between VPCs in different regions.
As an example of a hardware functional unit, the processing unit 602 may include at least one computing device, such as a server. Alternatively, the processing unit 602 may be a device implemented using an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or the like. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The processing unit 602 may include multiple computing devices distributed in the same region or in different regions, in the same AZ or in different AZs, and likewise in the same VPC or across multiple VPCs. The multiple computing devices may be any combination of computing devices such as servers, ASICs, PLDs, CPLDs, FPGAs, and GALs.
The transceiver unit 601 and the processing unit 602 implement different steps of the data packet processing method, thereby together realizing all the functions of the cloud server 600. The cloud server 600 performs the operations performed by the cloud server in the embodiments shown in Fig. 2 to Fig. 5 to implement the data packet processing method according to the embodiments of the present application, which is not repeated here.
Referring to Fig. 7, Fig. 7 is a schematic structural diagram of a computing device according to an embodiment of the present application. The computing device 700 includes a processor 701, a communication interface 702, a bus 703, and a memory 704. The processor 701, the communication interface 702, and the memory 704 communicate with one another through the bus 703; in practical applications, they may also communicate by other means such as wireless transmission, which is not limited here.
The computing device 700 may be a server. It should be understood that the present application does not limit the number of processors or memories in the computing device 700.
The processor 701 may include any one or more of a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP), or a digital signal processor (DSP).
Communication interface 702 enables communication between computing device 700 and other devices or communication networks using a transceiver module such as, but not limited to, a network interface card, transceiver, or the like.
The bus 703 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. Buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, only one line is shown in Fig. 7, but this does not mean there is only one bus or one type of bus. The bus 703 may include a path for transferring information between the components of the computing device 700 (for example, the memory 704, the processor 701, and the communication interface 702).
The memory 704 may include volatile memory, such as random access memory (RAM). The memory 704 may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD).
The memory 704 stores executable program code, and the processor 701 executes this code to implement the functions of the transceiver unit 601 and the processing unit 602, respectively, thereby implementing the data packet processing method. That is, the memory 704 stores instructions for performing the data packet processing method.
The embodiment of the application also provides a computing device cluster, which comprises at least one computing device. The computing device may be a server, such as a central server, an edge server, or a local server in a local data center. In some alternative embodiments, the computing device may also be a terminal device such as a desktop, notebook, or smart phone.
Referring to fig. 8 and fig. 9, fig. 8 and fig. 9 are schematic structural diagrams of a computing device cluster according to an embodiment of the present application.
As shown in fig. 8, the computing device cluster includes at least one computing device 700. The memories 704 of one or more computing devices 700 in the cluster may store the same instructions for performing the data packet processing method provided by the embodiment of the present application.
In some possible implementations, parts of the instructions for performing the data packet processing method may instead be stored separately in the memories 704 of one or more computing devices 700 in the computing device cluster. In other words, one or more computing devices 700 may jointly execute the instructions for performing the data packet processing method.
It should be noted that the memories 704 in different computing devices 700 in the computing device cluster may store different instructions, each performing part of the functions of the cloud server. That is, the instructions stored in the memories 704 of different computing devices 700 may implement the functions of one or more of the transceiver unit 601 and the processing unit 602.
In some possible implementations, one or more computing devices in the computing device cluster may be connected through a network. The network may be a wide area network, a local area network, or the like. Fig. 9 shows one possible implementation. As shown in fig. 9, two computing devices 700A and 700B are connected by the network; specifically, each computing device connects to the network through its communication interface. In this possible implementation, instructions for performing the functions of the transceiver unit 601 are stored in the memory 704 of computing device 700A, while instructions for performing the functions of the processing unit 602 are stored in the memory 704 of computing device 700B.
The connection manner between computing devices shown in fig. 9 may be chosen considering that, in the data packet processing method provided by the present application, the processing operations and the operations other than processing lend themselves to being implemented separately; that is, the functions of the transceiver unit 601 are performed by computing device 700A, while the functions of the processing unit 602 are performed by computing device 700B.
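As a hedged sketch of this split (the use of plain TCP with a JSON wire format, and every name below, are assumptions made solely for this illustration and are not part of the application), computing device 700A could send the queue identifiers to computing device 700B and receive the scheduling policy back:

package main

import (
	"encoding/json"
	"fmt"
	"net"
)

// A hypothetical wire format: device 700A sends the queue identifiers
// and device 700B replies with a bandwidth share per identifier.
type policyRequest struct{ QueueIDs []string }
type policyReply struct{ Shares map[string]int }

// runProcessingDevice stands in for computing device 700B: it hosts
// the processing-unit logic and answers one policy request.
func runProcessingDevice(ln net.Listener) {
	conn, err := ln.Accept()
	if err != nil {
		return
	}
	defer conn.Close()
	var req policyRequest
	if err := json.NewDecoder(conn).Decode(&req); err != nil || len(req.QueueIDs) == 0 {
		return
	}
	shares := make(map[string]int, len(req.QueueIDs))
	for _, id := range req.QueueIDs {
		shares[id] = 100 / len(req.QueueIDs) // equal shares, for the sketch only
	}
	json.NewEncoder(conn).Encode(policyReply{Shares: shares})
}

// main stands in for computing device 700A: it hosts the
// transceiver-unit logic and fetches the policy over the network.
func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	go runProcessingDevice(ln)

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	json.NewEncoder(conn).Encode(policyRequest{QueueIDs: []string{"tenant-A", "tenant-B"}})
	var reply policyReply
	if err := json.NewDecoder(conn).Decode(&reply); err != nil {
		panic(err)
	}
	fmt.Println("policy received from the processing device:", reply.Shares)
}

In this sketch only queue identifiers and shares cross the network; where the packets themselves travel between the two devices is left out, as the application does not fix that detail here.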
It should be appreciated that the functionality of computing device 700A shown in fig. 9 may also be performed by multiple computing devices 700. Likewise, the functionality of computing device 700B may also be performed by multiple computing devices 700.
The embodiment of the application also provides another computing device cluster. The connection relationship between the computing devices in the computing device cluster may be similar to the connection manner of the computing device cluster described with reference to fig. 8 and fig. 9, which is not described herein.
Embodiments of the present application also provide a computer program product containing instructions. The computer program product may be software or a program product containing instructions that can run on a computing device or be stored in any usable medium. When the computer program product runs on at least one computing device, the at least one computing device is caused to perform the method for processing a data packet described above.
The embodiment of the application also provides a computer-readable storage medium. The computer-readable storage medium may be any usable medium that a computing device can store, or a data storage device, such as a data center, that contains one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid state drive), or the like. The computer-readable storage medium contains instructions that instruct a computing device to perform the method for processing a data packet described above.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
It should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the above embodiments, those skilled in the art should understand that the technical solutions described in the above embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. A method for processing a data packet, wherein the method is applied to a cloud server, the cloud server communicates with a plurality of network devices through a physical network card, and the method comprises:
acquiring a plurality of data packet queues, wherein the data packets comprised in each data packet queue correspond to a same virtual machine, and the plurality of data packet queues originate from the plurality of network devices;
determining a scheduling policy according to an identifier of each data packet queue, wherein the scheduling policy indicates the communication resources used for forwarding each data packet queue to the corresponding virtual machine; and
sending, based on the scheduling policy, data packets to the virtual machine corresponding to each data packet queue.

2. The method according to claim 1, wherein the identifier of each data packet queue comprises a tenant identifier of each data packet queue;
before the determining a scheduling policy, the method further comprises:
acquiring the tenant priority corresponding to each data packet queue; and
the determining a scheduling policy according to the identifier corresponding to each data packet queue comprises:
determining the scheduling policy according to the tenant identifier of each data packet queue and the tenant priority corresponding to each data packet queue,
wherein the scheduling policy comprises: the communication resources used for forwarding a data packet queue whose tenant priority is high to the corresponding first virtual machine are superior to the communication resources used for forwarding a data packet queue whose tenant priority is low to the corresponding second virtual machine.

3. The method according to claim 1 or 2, wherein the communication resources used for forwarding each data packet queue to the corresponding virtual machine comprise:
at least one of a transmission rate at which each data packet queue is forwarded to the corresponding virtual machine, the bandwidth resources used, the corresponding throughput, or a delay.

4. The method according to any one of claims 1 to 3, wherein the acquiring a plurality of data packet queues comprises:
receiving data packets to be classified that are sent by the physical network card;
determining the virtual machine corresponding to the data packets to be classified according to the destination addresses of the data packets to be classified and a mapping relationship between destination addresses and virtual machine addresses; and
classifying the data packets to be classified according to the virtual machines corresponding to the data packets to be classified, to obtain the plurality of data packet queues.

5. The method according to any one of claims 1 to 3, wherein the acquiring a plurality of data packet queues comprises:
receiving the plurality of data packet queues sent by the physical network card.

6. A cloud server, wherein the cloud server communicates with a plurality of network devices through a physical network card, and the cloud server comprises:
a transceiver unit, configured to acquire a plurality of data packet queues, wherein the data packets comprised in each data packet queue correspond to a same virtual machine, and the plurality of data packet queues originate from a plurality of network devices; and
a processing unit, configured to determine a scheduling policy according to an identifier of each data packet queue, wherein the scheduling policy indicates the communication resources used for forwarding each data packet queue to the corresponding virtual machine,
wherein the transceiver unit is further configured to send, based on the scheduling policy, data packets to the virtual machine corresponding to each data packet queue.

7. The cloud server according to claim 6, wherein the identifier of each data packet queue comprises a tenant identifier of each data packet queue;
the transceiver unit is further configured to acquire the tenant priority corresponding to each data packet queue; and
the processing unit is specifically configured to determine the scheduling policy according to the tenant identifier of each data packet queue and the tenant priority corresponding to each data packet queue,
wherein the scheduling policy comprises: the communication resources used for forwarding a data packet queue whose tenant priority is high to the corresponding first virtual machine are superior to the communication resources used for forwarding a data packet queue whose tenant priority is low to the corresponding second virtual machine.

8. The cloud server according to claim 6 or 7, wherein the communication resources used for forwarding each data packet queue to the corresponding virtual machine comprise:
at least one of a transmission rate at which each data packet queue is forwarded to the corresponding virtual machine, the bandwidth resources used, the corresponding throughput, or a delay.

9. The cloud server according to any one of claims 6 to 8, wherein the transceiver unit is specifically configured to:
receive data packets to be classified that are sent by the physical network card;
determine the virtual machine corresponding to the data packets to be classified according to the destination addresses of the data packets to be classified and a mapping relationship between destination addresses and virtual machine addresses; and
classify the data packets to be classified according to the virtual machines corresponding to the data packets to be classified, to obtain the plurality of data packet queues.

10. The cloud server according to any one of claims 6 to 8, wherein the transceiver unit is specifically configured to receive the plurality of data packet queues sent by the physical network card.

11. A computing device cluster, comprising at least one computing device, each computing device comprising a processor and a memory, wherein the processor of the at least one computing device is configured to execute instructions stored in the memory of the at least one computing device, so that the computing device cluster performs the method according to any one of claims 1 to 5.

12. A computer program product comprising instructions, wherein when the instructions are run by a computing device cluster, the computing device cluster is caused to perform the method according to any one of claims 1 to 5.

13. A computer-readable storage medium, wherein the computer-readable storage medium comprises computer program instructions, and when the computer program instructions are executed by a computing device cluster, the computing device cluster is caused to perform the method according to any one of claims 1 to 5.
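By way of a non-limiting illustration of claims 1, 2 and 4, the following Go sketch (every identifier is hypothetical, and the proportional share calculation is only an assumed reading of the "superior communication resources" recited in claim 2) first classifies packets into per-VM queues via a destination-address mapping and then weights each queue's bandwidth share by its tenant priority:

package main

import "fmt"

// packet carries only the field the sketch needs: a destination address.
type packet struct {
	dst string
}

// classify mirrors the behaviour recited in claim 4: packets to be
// classified are grouped into per-VM queues using a mapping from
// destination addresses to virtual machine addresses.
func classify(pkts []packet, dstToVM map[string]string) map[string][]packet {
	queues := make(map[string][]packet)
	for _, p := range pkts {
		vm := dstToVM[p.dst]
		queues[vm] = append(queues[vm], p)
	}
	return queues
}

// schedule mirrors the behaviour recited in claim 2: each queue's
// tenant priority weights its bandwidth share, so a high-priority
// tenant's queue receives better communication resources than a
// low-priority tenant's queue.
func schedule(tenantOfVM map[string]string, priority map[string]int,
	queues map[string][]packet) map[string]int {
	total := 0
	for vm := range queues {
		total += priority[tenantOfVM[vm]]
	}
	shares := make(map[string]int, len(queues))
	if total == 0 {
		return shares
	}
	for vm := range queues {
		shares[vm] = 100 * priority[tenantOfVM[vm]] / total
	}
	return shares
}

func main() {
	dstToVM := map[string]string{"10.0.0.1": "vm-1", "10.0.0.2": "vm-2"}
	tenantOfVM := map[string]string{"vm-1": "tenant-A", "vm-2": "tenant-B"}
	priority := map[string]int{"tenant-A": 3, "tenant-B": 1} // A outranks B

	queues := classify([]packet{{dst: "10.0.0.1"}, {dst: "10.0.0.2"}, {dst: "10.0.0.1"}}, dstToVM)
	for vm, share := range schedule(tenantOfVM, priority, queues) {
		fmt.Printf("%s: %d packet(s), %d%% bandwidth share\n", vm, len(queues[vm]), share)
	}
}

Running the sketch grants vm-1 (the higher-priority tenant-A) a 75% share against 25% for vm-2, mirroring the relationship between the first and second virtual machines recited in claim 2.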
CN202410489651.5A 2023-12-27 2024-04-18 A data packet processing method and related equipment Pending CN120223694A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202311839364 2023-12-27
CN2023118393644 2023-12-27

Publications (1)

Publication Number Publication Date
CN120223694A true CN120223694A (en) 2025-06-27

Family

ID=96101510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410489651.5A Pending CN120223694A (en) 2023-12-27 2024-04-18 A data packet processing method and related equipment

Country Status (1)

Country Link
CN (1) CN120223694A (en)

Legal Events

Date Code Title Description
PB01 Publication