
CN120729797A - Data center traffic forwarding method, program product, electronic device and storage medium - Google Patents


Info

Publication number
CN120729797A
CN120729797A (application CN202511049953.1A)
Authority
CN
China
Prior art keywords
security
load balancing
core switch
data
switch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202511049953.1A
Other languages
Chinese (zh)
Inventor
李耀
陈林
王发修
高斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu New Hope Finance Information Co Ltd
Original Assignee
Chengdu New Hope Finance Information Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu New Hope Finance Information Co Ltd filed Critical Chengdu New Hope Finance Information Co Ltd
Priority to CN202511049953.1A
Publication of CN120729797A
Legal status: Pending

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a data center traffic forwarding method, a program product, an electronic device and a storage medium. The method comprises: receiving traffic data through a core switch and distributing it to at least one security cluster switch; performing security monitoring on the traffic data with the security devices corresponding to the security cluster switch and returning the monitored traffic data to the core switch; forwarding, by the core switch, the monitored traffic data to a plurality of first load balancing devices, which distribute the traffic; forwarding, by the first load balancing devices, the traffic data to an intranet core switch through dynamic or static routing; and forwarding, by the intranet core switch, the traffic data to second load balancing devices, which respond to the traffic data. Splitting the traffic step by step increases the traffic scale the overall network architecture can handle, so that massive service requests can be served. All devices operate independently, which preserves the overall stability of the network.

Description

Data center traffic forwarding method, program product, electronic device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data center traffic forwarding method, a program product, an electronic device, and a storage medium.
Background
Data centers often need to handle massive service requests, including traffic from scenarios such as e-commerce transactions, video streaming and cloud computing services. Such traffic is highly bursty, and existing traffic forwarding methods struggle to cope with sudden traffic growth, which risks collapsing the network architecture and affects the overall stability of the network.
Disclosure of Invention
An object of the embodiments of the application is to provide a data center traffic forwarding method, a program product, an electronic device and a storage medium that address the above problems.
The embodiment of the application provides a data center traffic forwarding method, comprising: receiving traffic data through a core switch and distributing the traffic data to at least one security cluster switch; performing security monitoring on the traffic data using the security devices corresponding to the security cluster switch, and returning the monitored traffic data to the core switch; forwarding, by the core switch, the monitored traffic data to a plurality of first load balancing devices, the first load balancing devices performing traffic distribution; forwarding, by the first load balancing devices, the traffic data to an intranet core switch through dynamic or static routing; forwarding, by the intranet core switch, the traffic data to second load balancing devices; and responding to the traffic data by the second load balancing devices.
In this implementation, the traffic is split step by step: the core switch distributes it to the security cluster switches and their corresponding security devices, and the first load balancing devices then distribute it to a plurality of servers, dispersing the traffic processing pressure that would otherwise concentrate on a few devices. The load on any single network device is reduced; load can be spread across multiple four-layer load balancing devices as needed and then across a large number of seven-layer load balancing devices, greatly increasing the traffic scale the overall network architecture can handle, so that massive service requests are served with ease. All devices operate independently, abandoning the traditional cluster or stacking mode, so that even if one device fails the other devices are unaffected, which ensures the overall stability of the network.
Optionally, in the embodiment of the application, performing security monitoring on the traffic data using the security devices corresponding to the security cluster switch and returning the monitored traffic data to the core switch comprises: the security devices send security layer routes to the corresponding security cluster switch; the security cluster switch learns the security layer routes through a dynamic routing protocol and, according to the destination address of the traffic data, forwards the traffic data to the corresponding security device using a five-tuple hash algorithm; and the security device performs security monitoring on the received traffic data and returns the monitored traffic data to the core switch.
In this implementation, because the traffic is pre-split by the security cluster switch, the traffic handled by any single security device stays moderate, which improves monitoring efficiency and accuracy, avoids missed or delayed security monitoring due to traffic overload, and raises the overall security protection level of the data center. A dynamic routing relationship is established among the cluster switch, the core switch and the security devices: when a link or a security device fails, routes are adjusted quickly and traffic is forwarded over other available paths and devices, reducing interruption of traffic during security monitoring and strengthening the robustness and reliability of the network architecture.
Optionally, in the embodiment of the application, receiving the traffic data through the core switch and distributing it to at least one security cluster switch comprises: acquiring an operation mode, the operation mode comprising a normal mode and an abnormal mode, wherein the normal mode represents performing security monitoring on the traffic data, and the abnormal mode represents a security device failure, in which case the traffic data is forwarded directly to the first load balancing devices by the core switch; and, when the operation mode is the normal mode, receiving the traffic data through the core switch and distributing it to at least one security cluster switch.
In this implementation, in the normal mode the core switch distributes the traffic data to the security cluster switch and the security devices perform security monitoring, ensuring the security of the traffic data. The security devices can comprehensively inspect and filter traffic data, preventing security threats such as malicious attacks and viruses from entering the internal network of the data center and protecting the data center's business systems and data assets. At the same time, because traffic data passes through a security monitoring stage in this mode, potential security risks can be discovered and handled in time, maintaining the overall safe and stable operation of the data center.
Optionally, in the embodiment of the application, the routing table of the core switch comprises a security layer route and a load balancing layer route, the security layer route having higher priority than the load balancing layer route; the method further comprises: in the abnormal mode, the security layer route is withdrawn, and the core switch forwards the received traffic data to the plurality of first load balancing devices.
In this implementation, in the abnormal mode, when the security devices fail, traffic data can be forwarded directly to the first load balancing devices through the core switch, ensuring service continuity. Traffic transmission is not interrupted by a security device failure, so the data center's services keep running, the impact of the failure on the business is reduced, and the availability and reliability of the system are improved.
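The priority relationship between the two route entries can be illustrated with a minimal Python sketch. This is not how any particular switch implements route selection; the function name, the priority values and the device labels are illustrative assumptions, with a lower priority value winning in the manner of an administrative distance.

```python
# Illustrative sketch: two routes to the same destination; the security-layer
# route is preferred while advertised, and withdrawn in the abnormal mode.
def select_next_hop(routes, security_layer_up):
    """Prefer the security-layer entry while it is up; otherwise fall back
    to the load-balancing-layer entry (lower priority value wins)."""
    candidates = [r for r in routes
                  if security_layer_up or r["layer"] != "security"]
    return min(candidates, key=lambda r: r["priority"])["next_hop"]

routes = [
    {"layer": "security", "priority": 10, "next_hop": "SecSw"},
    {"layer": "load_balancing", "priority": 20, "next_hop": "L4-Loadb"},
]

assert select_next_hop(routes, security_layer_up=True) == "SecSw"
assert select_next_hop(routes, security_layer_up=False) == "L4-Loadb"
```

Withdrawing the higher-priority route, rather than reconfiguring the switch, is what lets the fallback path activate automatically.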
Optionally, in the embodiment of the application, the core switch forwarding the monitored traffic data to a plurality of first load balancing devices comprises: the core switch forwards the monitored traffic data to a second core switch through a five-tuple hash algorithm; the second core switch distributes the traffic data to the first load balancing devices through equal-cost routes; and the first load balancing devices are four-layer load balancing devices.
In this implementation, the five-tuple hash algorithm of the core switch ensures that traffic data of the same session is always forwarded to the second core switch along the same path, preserving session integrity. The second core switch distributes the traffic to the plurality of first load balancing devices through equal-cost routes, achieving load balancing and redundancy. The four-layer load balancing devices distribute traffic according to IP and port information, so traffic is allocated to the back-end servers more accurately and the load on each server stays balanced.
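The session-stickiness property can be sketched in a few lines of Python. Real switches use vendor-specific hardware hash functions over the five-tuple; the SHA-256-based hash, path names and flow values below are illustrative assumptions, and only the invariant matters: the same five-tuple always maps to the same equal-cost path.

```python
import hashlib

def ecmp_path(five_tuple, paths):
    """Map a flow's five-tuple to one of several equal-cost paths.
    Same tuple -> same path, so packets of one session stay in order."""
    key = "|".join(map(str, five_tuple)).encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return paths[digest % len(paths)]

paths = ["L4-Loadb-1", "L4-Loadb-2", "L4-Loadb-3"]
flow = ("203.0.113.5", "198.51.100.9", 51512, 443, "TCP")

chosen = ecmp_path(flow, paths)
# Every packet of this session hashes to the same next hop.
assert all(ecmp_path(flow, paths) == chosen for _ in range(10))
```

Because the mapping is deterministic, no per-session state needs to be kept on the switch to keep a session on one path.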
Optionally, in the embodiment of the application, the intranet core switch forwarding the traffic data to the second load balancing devices comprises: the intranet core switch forwards the traffic data to the second load balancing devices through a five-tuple hash algorithm, the second load balancing devices being seven-layer load balancing devices.
In this implementation, the seven-layer load balancing devices distribute traffic according to application-layer information, allocating it to the back-end servers more precisely and improving the load balance across servers. This improves the overall performance and resource utilization of the system and prevents servers from being overloaded or left idle by concentrated traffic.
Optionally, in the embodiment of the application, forwarding the traffic data to the corresponding security device using a five-tuple hash algorithm comprises: the five-tuple comprises a source address, a destination address, a source port, a destination port and a protocol type; a hash value is calculated from the five-tuple; the hash value is used to determine the security device corresponding to the traffic data; and the traffic data is forwarded to the corresponding security device according to the hash value.
In this implementation, the hash algorithm disperses the traffic across multiple security devices, so no single security device is overloaded by excessive traffic. Each security device handles a share of the traffic, improving the processing capacity and performance of the whole system. The security devices work in parallel, making full use of the data center's hardware resources and improving resource utilization. When service growth requires new security devices, it suffices to attach the new devices to the security cluster switch and update the mapping between hash values and devices, after which part of the traffic is forwarded to the new devices. This flexible expansion avoids large-scale reconstruction of the existing system and reduces expansion cost and complexity.
The embodiment of the application also provides a data center network architecture, comprising a core switch, security cluster switches, security devices, first load balancing devices, second load balancing devices and an intranet core switch. The core switch receives traffic data and performs first-layer routing to the security cluster switches; the security cluster switches forward the traffic data to the security devices and achieve load balancing across the security devices through dynamic routing; the security devices perform security monitoring on the traffic data and advertise routes; the first load balancing devices perform traffic distribution; the intranet core switch connects the first load balancing devices and the second load balancing devices to form the intranet architecture; and the second load balancing devices parse the application protocol and perform secondary load balancing of the traffic data. The core switch is connected with the security cluster switches and the first load balancing devices; each security cluster switch is connected with its corresponding security devices; and the intranet core switch is connected with the first load balancing devices and the second load balancing devices respectively.
In this implementation, multistage load balancing (first load balancing and second load balancing) distributes traffic reasonably across multiple servers and devices, avoiding single-point overload and improving overall processing capacity and response speed. The five-tuple hash algorithm ensures that packets of the same session are processed along the same path, reducing time lost to out-of-order delivery and improving user experience. The core switch is connected to multiple devices, and even if one device fails the others keep working, which ensures network stability. New load balancing devices, security devices or servers can be added conveniently, enabling linear expansion of network processing capacity to meet service growth.
In a third aspect, embodiments of the present application also provide a computer program product comprising computer program instructions which, when executed by a processor, perform the method of the first aspect or any implementation of the first aspect.
In a fourth aspect, embodiments of the present application also provide an electronic device comprising a processor and a memory storing computer program instructions that, when executed by the processor, perform the method provided by the first aspect or any implementation of the first aspect.
In a fifth aspect, embodiments of the present application also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, perform the method provided by the first aspect or any implementation of the first aspect.
With the data center traffic forwarding method, program product, electronic device and storage medium described above, the traffic is distributed step by step: the core switch distributes it to the security cluster switches and their corresponding security devices, and the first load balancing devices then distribute it to a plurality of servers, dispersing the traffic processing pressure that would otherwise concentrate on a few devices. The load on any single network device is reduced; load can be spread across multiple four-layer load balancing devices as needed and then across a large number of seven-layer load balancing devices, greatly increasing the traffic scale the overall network architecture can handle, so that massive service requests are served with ease. All devices operate independently, abandoning the traditional cluster or stacking mode, so that even if one device fails the other devices are unaffected, which ensures the overall stability of the network.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be considered limiting in scope; other related drawings can be derived from them by a person skilled in the art without inventive effort.
Fig. 1 is a flow chart of a data center traffic forwarding method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a data center network architecture according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the technical scheme of the present application will be described in detail below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present application, and thus are merely examples, and are not intended to limit the scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs, and the terms used herein are for the purpose of describing particular embodiments only and are not intended to limit the application.
In the description of embodiments of the present application, the technical terms "first," "second," and the like are used merely to distinguish between different objects and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated, a particular order or a primary or secondary relationship. In the description of the embodiments of the present application, the meaning of "plurality" is two or more unless otherwise specifically defined.
Before describing embodiments of the present application, components of a data center network architecture are described:
The Core Switch is located in the Core position of the data center network, is mainly responsible for high-speed data exchange and forwarding, is a backbone part of the data center network, connects the external network with the internal network of the data center, and provides a high-bandwidth and low-delay data transmission channel for the data center.
The security cluster switch (SecSw) is used as an access point and a traffic distribution center of the security device, is connected with the core switch, and is responsible for distributing traffic forwarded by the core switch to a plurality of security devices for security monitoring, and meanwhile, collects traffic processed by the security devices and returns the traffic to the core switch.
The security device (Sec) is deployed on the security cluster switch and performs security monitoring and filtering of the traffic data, checking the traffic for security threats such as malicious attacks, viruses and intrusions, safeguarding the network security of the data center.
The first load balancing device (L4-Loadb) is based on four-layer load balancing technology and is responsible for distributing the traffic forwarded by the core switch to multiple servers or service nodes according to a specific load balancing algorithm (such as round robin, least connections, etc.), achieving reasonable traffic distribution and balanced server load.
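The two algorithms named above can be sketched minimally in Python. These are textbook illustrations, not the device's actual implementation; the class and server names are invented for the example.

```python
from itertools import cycle

class RoundRobin:
    """Round robin: hand each new connection to the next server in turn."""
    def __init__(self, servers):
        self._it = cycle(servers)

    def pick(self):
        return next(self._it)

def least_connections(conn_counts):
    """Least connections: pick the server with the fewest active connections."""
    return min(conn_counts, key=conn_counts.get)

rr = RoundRobin(["srv-a", "srv-b"])
assert [rr.pick() for _ in range(4)] == ["srv-a", "srv-b", "srv-a", "srv-b"]
assert least_connections({"srv-a": 12, "srv-b": 3}) == "srv-b"
```

Round robin assumes roughly uniform request cost, while least connections adapts when some requests hold connections much longer than others.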
And an intranet Core switch (Internal-Core) is connected with the first load balancing equipment and the intranet access switch (TOR), plays a role of Core switching in the Internal network of the data center, and is responsible for forwarding and switching data among different areas of the intranet.
The second load balancing device (L7-Loadb) is based on seven-layer load balancing technology, sits behind the intranet access switch, and performs finer-grained distribution and scheduling of traffic according to application-layer information (such as URL, Cookie, etc.), improving the availability and performance of applications.
Please refer to fig. 1, which illustrates a flow chart of a data center traffic forwarding method according to an embodiment of the present application. The data center traffic forwarding method provided by the embodiment of the application can be applied to an electronic device, which may be a physical device such as a server, a PC, a tablet computer or a smartphone, or a virtual device such as a virtual machine or a container; the electronic device may be a single device, a combination of several devices, or a cluster of a large number of devices. The data center traffic forwarding method may include:
And S110, receiving the traffic data through the core switch, distributing the traffic data to at least one security cluster switch, performing security monitoring on the traffic data using the security devices corresponding to the security cluster switch, and returning the monitored traffic data to the core switch.
And step S120, forwarding the monitored traffic data to a plurality of first load balancing devices by the core switch, the first load balancing devices being used for traffic distribution.
And step S130, forwarding the traffic data to the intranet core switch by the first load balancing device through the static route.
And step S140, forwarding the traffic data to the second load balancing devices by the intranet core switch, and responding to the traffic data with the second load balancing devices.
In step S110, the core switch receives traffic data from the external network, and forwards the traffic data to the security cluster switch according to a preset policy (such as an access control list, ACL, etc.) and a routing rule.
After receiving the traffic data, the security cluster switch distributes the traffic to the plurality of connected security devices according to its configured load balancing policy (such as equal-cost multi-path routing, ECMP). For example, the traffic may be split evenly, or another allocation scheme may be chosen.
Each security device performs security monitoring on the distributed traffic data, including intrusion detection, firewall filtering, and/or virus scanning, to identify and filter out potential security threats.
The security devices return the monitored traffic data to the security cluster switch, which forwards it back to the core switch. In this way, traffic entering the data center undergoes a security check, which helps ensure the security of the data center's internal network.
As an implementation manner, before the traffic reaches the switch (e.g., SecSw) connected to the four-layer load balancing devices (L4-Loadb), only one logical network device (e.g., core 1) on the whole traffic path is responsible for centrally forwarding all traffic, instead of multiple devices sharing the traffic in parallel. A logical network device here is a logical forwarding unit, which may be a single physical switch (such as core 1) or a virtualized logical unit composed of multiple switches (such as an M-LAG or stacking system), but which presents externally as one unified forwarding entity. All traffic from the external network first converges at this logical device (core 1), which then makes a unified decision on the next forwarding path.
In step S120, after receiving the traffic data after the security monitoring, the core switch distributes the traffic data to the plurality of first load balancing devices according to the routing information learned by the dynamic routing protocol (such as open shortest path first, OSPF) and the pre-configured load balancing policy.
After receiving the traffic data, the first load balancing device distributes the traffic data to a plurality of servers or service nodes at the back end, such as an intranet core switch, according to a four-layer load balancing algorithm. This distribution mechanism enables multiple servers to share traffic load, improving the concurrent processing capacity and availability of the system.
In step S130, a static route is preconfigured between the first load balancing device and the intranet core switch, so that a forwarding path of the traffic data from the first load balancing device to the intranet core switch is defined.
The first load balancing device forwards the traffic data to the intranet core switch according to the static route configuration. Using static routing ensures stable transmission of traffic data between the intranet core switch and the first load balancing device, reduces the overhead and uncertainty of a dynamic routing protocol, and prevents route flapping from affecting session persistence.
As an implementation, the first load balancing device (L4-Loadb) may also perform this forwarding through a dynamic routing protocol (e.g., OSPF/BGP) instead of forwarding traffic to the intranet core switch (Internal-Core) using static routing. For example, the same dynamic routing protocol runs between the first load balancing device and the intranet core switch, and the first load balancing device advertises its local virtual service address or back-end server network segments to the intranet core switch through the routing protocol. After the first load balancing device completes four-layer load balancing, the traffic is encapsulated into new packets and forwarded to the intranet core switch along the optimal path selected by the dynamic routing protocol. Introducing dynamic routing makes traffic forwarding between the first load balancing device and the intranet core switch more intelligent and elastic, which is especially suitable for large-scale, highly dynamic data center environments.
In step S140, after receiving the traffic data forwarded by the first load balancing device, the intranet core switch forwards the traffic data to the second load balancing device according to the routing policy and configuration in the intranet core switch.
The second load balancing device is based on seven-layer load balancing technology and further analyzes and processes the traffic data according to application-layer information (such as the URL, Host header, etc.), and the back-end server responds to the traffic data. It can forward the traffic data to the corresponding server, return the server's response data to the intranet core switch, which finally sends the response data back to the core switch, and the core switch returns it to the user. This process achieves fine-grained management and scheduling of application-layer services and improves application performance and user experience.
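Dispatching on application-layer fields such as the Host header and URL path can be sketched as a first-match rule table. This is an illustrative model, not the device's configuration language; the hostnames, path prefixes and pool names are invented for the example.

```python
def l7_dispatch(host, path, rules, default_pool):
    """Pick a back-end pool from application-layer fields (Host, URL path).
    Rules are checked in order; the first fully matching rule wins."""
    for rule in rules:
        if rule.get("host") and rule["host"] != host:
            continue
        if rule.get("prefix") and not path.startswith(rule["prefix"]):
            continue
        return rule["pool"]
    return default_pool

rules = [
    {"host": "img.example.com", "pool": "static-pool"},
    {"prefix": "/api/", "pool": "api-pool"},
]

assert l7_dispatch("img.example.com", "/a.png", rules, "web") == "static-pool"
assert l7_dispatch("www.example.com", "/api/v1/x", rules, "web") == "api-pool"
assert l7_dispatch("www.example.com", "/index", rules, "web") == "web"
```

This is what distinguishes seven-layer from four-layer balancing: the decision depends on parsed request content, not just on IP addresses and ports.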
In the implementation process of this embodiment, the traffic is distributed step by step: the core switch distributes it to the security cluster switches and their corresponding security devices, and the first load balancing devices then distribute it to a plurality of servers, dispersing the traffic processing pressure that would otherwise concentrate on a few devices. The load on any single network device is reduced; load can be spread across multiple four-layer load balancing devices as needed and then across a large number of seven-layer load balancing devices, greatly increasing the traffic scale the overall network architecture can handle, so that massive service requests are served with ease. All devices operate independently, abandoning the traditional cluster or stacking mode, so that even if one device fails the other devices are unaffected, which ensures the overall stability of the network.
For example, when a single security device or load balancing device fails, traffic can be forwarded through the other healthy devices, so a single point of failure does not interrupt service for the whole data center.
Optionally, in an embodiment of the present application, security monitoring is performed on the traffic data by using a security device corresponding to the security cluster switch, and the traffic data after the security monitoring is returned to the core switch, including:
the security device sends the security layer route to the corresponding security cluster switch. The security device configures a routing protocol to enable it to establish a dynamic routing relationship with the security cluster switch. For example, the security device may run the OSPF protocol and send its own IP address, subnet mask, etc. route information to the security cluster switch via an OSPF Hello message. Assuming the IP address of the security device is 192.168.1.10 and the subnet mask is 255.255.255.0, it encapsulates this information in an OSPF message and sends it to the multicast address 224.0.0.5 of the network segment where the security cluster switch is located to inform the security cluster switch of its own presence and reachable network range.
When the security device sends the security layer route, the security device can issue specific route information according to the service requirement and network configuration. For example, it may issue a host route to its own IP address telling the security cluster switch that all traffic destined for that IP address should go through its own processing. Or it may issue a network route containing multiple ranges of IP addresses indicating that it is able to handle traffic within these network ranges.
The security cluster switch learns the security layer route through the dynamic routing protocol, and forwards the traffic data to the corresponding security device by utilizing the quintuple hash algorithm according to the traffic data destination address.
The dynamic routing protocol is a network protocol for automatically exchanging routing information between different network devices, enabling the devices to dynamically learn and update routing tables. Common dynamic routing protocols include Open Shortest Path First (OSPF), border Gateway Protocol (BGP), etc. Through the dynamic routing protocol, the network equipment can automatically adapt to the change of network topology, and an optimal path is selected for data transmission.
The five-tuple hash algorithm performs hash computation on the five-tuple of an IP packet (source IP address, destination IP address, source port number, destination port number, protocol type). In network traffic forwarding, the five-tuple hash algorithm sends the multiple IP packets belonging to the same data stream to the same next-hop device, so that the IP packets of one user request are all processed along the same path, avoiding problems such as packet reordering.
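As an illustration of this idea, the following Python sketch (not part of the patent; the particular hash function, the key format and the device count are assumptions) maps a flow's five-tuple to one of several next-hop devices, so that every packet of the same flow always takes the same path:

```python
import hashlib

def five_tuple_hash(src_ip: str, dst_ip: str, src_port: int,
                    dst_port: int, proto: str, n_next_hops: int) -> int:
    """Map a flow's five-tuple to one of n_next_hops devices.

    All packets of one flow share the same five-tuple, so they always
    hash to the same index, preserving per-flow packet order.
    """
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{proto}".encode()
    digest = hashlib.md5(key).digest()  # any stable hash would do here
    return int.from_bytes(digest[:4], "big") % n_next_hops

# Every packet of one TCP session picks the same next hop:
a = five_tuple_hash("192.168.1.100", "203.0.113.50", 54321, 80, "tcp", 4)
b = five_tuple_hash("192.168.1.100", "203.0.113.50", 54321, 80, "tcp", 4)
assert a == b
```

Reversing source and destination (the return direction) yields a different key, so production implementations often sort the endpoints first if both directions must hash identically.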
The security cluster switch runs the same dynamic routing protocol as the security device (such as OSPF), receives the routing information sent by the security device, and adds it to its own routing table. After receiving the OSPF messages of the security device, the security cluster switch parses the routing information, updates its own routing table, and records the network range connected to the security device together with the corresponding next-hop address (i.e., the IP address of the security device).
When the security cluster switch receives the traffic data forwarded by the core switch, it will first look up the routing table and find the matching routing entry according to the destination IP address of the traffic data. Assuming that the destination IP address of the received traffic data is 192.168.1.10, the security cluster switch searches the routing table to find that the next hop corresponding to the IP address is the IP address of the security device a, and then forwards the traffic data to the security device a.
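The lookup just described is a longest-prefix match against the routing table. A minimal Python sketch (the table contents and device names are illustrative, not taken from the patent):

```python
import ipaddress

def lookup_next_hop(routing_table, dst_ip):
    """Longest-prefix match: pick the most specific route containing dst_ip."""
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, next_hop in routing_table:
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop)
    return best[1] if best else None

table = [
    ("192.168.1.0/24", "security-cluster-switch"),
    ("192.168.1.10/32", "security-device-A"),  # host route advertised by the device
    ("0.0.0.0/0", "default-gateway"),
]
assert lookup_next_hop(table, "192.168.1.10") == "security-device-A"
assert lookup_next_hop(table, "192.168.1.20") == "security-cluster-switch"
```

The /32 host route wins for 192.168.1.10 even though the /24 also matches, which is exactly why a security device can attract all traffic to its own address by advertising a host route.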
And the security equipment carries out security monitoring on the received traffic data, and returns the traffic data after the security monitoring to the core switch. After receiving the flow data, the security device performs security monitoring on the flow data according to a pre-configured security policy and rule. For example, the firewall may check whether the traffic data conforms to a preset Access Control List (ACL) rule to determine whether the traffic is allowed to pass, and the intrusion detection system may analyze whether the traffic data has known attack characteristics or abnormal behavior patterns, such as SQL injection, XSS attack, etc.
After performing security monitoring on the received traffic, the security device can also record the monitoring process and results in logs, and then return the security-monitored traffic data to the core switch.
In the implementation of this embodiment, since the traffic is pre-split by the security cluster switch, the traffic handled by a single security device stays moderate, which improves monitoring efficiency and accuracy, avoids missed or delayed security monitoring caused by traffic overload, and raises the overall security protection level of the data center. A dynamic routing relationship is established among the security cluster switch, the core switch and the security devices, so that when a link or a security device fails, routes can be adjusted quickly and traffic is forwarded to other available paths and devices, reducing interruptions of traffic transmission during security monitoring and enhancing the robustness and reliability of the network architecture.
Optionally, in an embodiment of the present application, receiving, by a core switch, traffic data, and distributing the traffic data to at least one security cluster switch, including:
the method includes acquiring an operation mode, where the operation mode includes a normal mode and an abnormal mode; the normal mode represents that security monitoring is performed on the traffic data, and the abnormal mode represents that a security device has failed and the traffic data is forwarded directly to the first load balancing device through the core switch.
The working state of the data center network system is divided into a normal mode and an abnormal mode. The normal mode represents normal operation of the security devices, and the system performs the standard security monitoring procedure on the traffic data: after the core switch receives the traffic data, it distributes the data to the security cluster switch according to the configured policy, and the security devices then perform security monitoring. The abnormal mode represents that a security device has failed and the system has switched to the standby traffic processing mechanism. In this mode, the traffic data bypasses the security device detection link and is forwarded directly to the first load balancing device through the core switch, ensuring that traffic transmission is not interrupted by the security device failure.
The system monitors the running state of the security devices in real time through a monitoring module; the running state includes indicators such as device health, response time and resource utilization. The monitoring module periodically (e.g., every minute) sends a heartbeat signal to each security device to check whether the device responds properly, and it can also detect fault alarm information from the security devices, such as hardware failures and software crashes. Based on these monitoring data, the system determines whether it is currently in the normal mode or the abnormal mode. For example, if a security device does not respond to the heartbeat signal within the specified time or issues a fault alarm, the system determines that it is in the abnormal mode.
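The heartbeat-based mode decision can be sketched as follows. This is an illustrative Python model only; the 60-second timeout, the class shape and the device names are assumptions, not the patent's monitoring module:

```python
import time

HEARTBEAT_TIMEOUT = 60.0  # seconds; illustrative threshold

class SecurityDeviceMonitor:
    """Decide normal vs abnormal mode from heartbeat replies and fault alarms."""

    def __init__(self):
        self.last_heartbeat = {}   # device -> timestamp of last reply
        self.fault_alarms = set()  # devices that raised a fault alarm

    def record_heartbeat(self, device, now=None):
        self.last_heartbeat[device] = now if now is not None else time.time()

    def record_fault(self, device):
        self.fault_alarms.add(device)

    def mode(self, device, now=None):
        now = now if now is not None else time.time()
        if device in self.fault_alarms:
            return "abnormal"          # explicit fault alarm
        last = self.last_heartbeat.get(device)
        if last is None or now - last > HEARTBEAT_TIMEOUT:
            return "abnormal"          # heartbeat missing or timed out
        return "normal"

m = SecurityDeviceMonitor()
m.record_heartbeat("Sec1", now=1000.0)
assert m.mode("Sec1", now=1030.0) == "normal"    # replied 30 s ago
assert m.mode("Sec1", now=1100.0) == "abnormal"  # heartbeat timed out
```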
In the case that the operation mode is the normal mode, the step of receiving traffic data through the core switch and distributing the traffic data to at least one security cluster switch is performed. In the normal mode, the core switch, serving as a core node of the data center network, receives traffic data from the external network. Such traffic data may come from different user requests, such as web page access, video streaming and e-commerce transactions. The core switch then distributes the traffic data to at least one security cluster switch according to preset policies (such as access control lists and policy-based routing) and routing information learned through dynamic routing protocols (such as BGP and OSPF).
In the implementation process of the embodiment, in a normal mode, the core switch distributes the flow data to the security cluster switch, and the security equipment monitors the security, so that the security of the flow data is ensured. The security device can comprehensively check and filter traffic data, prevent security threats such as malicious attacks and viruses from entering the internal network of the data center, and protect business systems and data assets of the data center. Meanwhile, in the mode, the flow data is subjected to a safety monitoring link, so that potential safety risks can be found and responded in time, and the overall safe and stable operation of the data center is maintained.
Optionally, in the embodiment of the present application, the routing table of the core switch includes a security layer route and a load balancing layer route, where the security layer route has a higher priority than the load balancing layer route, and the method further includes:
In an abnormal mode, the security layer route is closed, and the core switch forwards the received traffic data to a plurality of first load balancing devices.
In the abnormal mode, secSw detects a Sec failure through a dynamic routing protocol (such as OSPF DEAD INTERVAL) when the security device fails, and automatically withdraws the corresponding route, closing the security layer route. The method is characterized in that after a core switch and a security cluster switch detect the fault state of the security device through dynamic routing protocols (such as OSPF, BGP and the like), routing entries related to the security device are removed from a routing table. For example, there is a route in the routing table of the original core switch that points to the security cluster switch for forwarding traffic to the security device. When the security device fails, the security cluster switch may send a route withdrawal message, and after receiving the message, the core switch deletes the route entry associated with the security device, thereby closing the security layer route. In addition, the relevant static route configuration can be deleted manually or automatically when the safety equipment fault is detected through the static route configuration.
In the implementation process of the embodiment, in the abnormal mode, when the safety equipment fails, the flow data can be directly forwarded to the first load balancing equipment through the core switch, so that the continuity of the service is ensured. The interruption of flow transmission caused by the failure of the safety equipment is avoided, the service of the data center can be ensured to continuously run, the influence on the service caused by the failure is reduced, and the availability and the reliability of the system are improved.
Optionally, in an embodiment of the present application, the core switch forwards the traffic data after security monitoring to a plurality of first load balancing devices, including:
The core switch forwards the security-monitored traffic data to the second core switch through a five-tuple hash algorithm, and the second core switch distributes the traffic data to the first load balancing devices through equivalent (equal-cost) routes, where the first load balancing device is a four-layer (Layer 4) load balancing device.
The second core switch is connected with the first load balancing device. The second core switch (that is, the redundant node of the boundary core switch) and the intranet core switch cooperate indirectly through the first load balancing layer; the relationship is essentially a cascade structure with decoupled functions. The fault isolation design covers two scenarios. In scenario 1, the second core switch fails: traffic is automatically switched to the first core switch, and the intranet core layer is not affected. In scenario 2, an intranet core switch fails: the dynamic routing protocol (such as OSPF) automatically steers traffic to the surviving intranet core switch without affecting the boundary core layer. Through this design, the layers are decoupled by the load balancing layer, fault domain isolation is achieved, and single-point fault diffusion is avoided.
The core switch extracts five-tuple information (source IP, destination IP, source port, destination port, protocol type) of the traffic data and calculates a hash value using a preset hash algorithm. This hash value is used to determine the forwarding path for the traffic data.
The core switch forwards the traffic data to the second core switch according to the calculated hash value. This ensures that traffic data of the same session is always forwarded over the same path, maintaining session consistency. For example, if all packets of a user session share the same five-tuple, they are hashed to the same path and ultimately forwarded to the same second core switch.
Equivalent (equal-cost) routing refers to multiple routing paths in the network that have the same destination but different next hops, where the paths are equal in cost or priority. When a network device (such as a switch or router) forwards traffic, it distributes the traffic uniformly over the equal-cost paths according to the equivalent route configuration, achieving load balancing and redundancy backup of the traffic.
The second core switch examines its routing table for equivalent routing entries that match the destination address of the traffic data. Equivalent routes generally refer to multiple routing paths with the same destination but different next hops.
The second core switch distributes the traffic data evenly across the multiple first load balancing devices according to the configuration of the equivalent routes. This may be achieved by a variety of load balancing algorithms, such as round robin, random selection or hashing.
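A toy round-robin spread over equal-cost next hops might look like the following (Python; the flow and device names are illustrative assumptions):

```python
from collections import Counter
from itertools import cycle

def ecmp_round_robin(flows, next_hops):
    """Spread flows over equal-cost next hops with simple round robin."""
    rotation = cycle(next_hops)
    return {flow: next(rotation) for flow in flows}

lbs = ["LB1", "LB2", "LB3"]
assignment = ecmp_round_robin([f"flow{i}" for i in range(9)], lbs)
# Nine flows over three equal-cost paths -> three flows per device:
assert Counter(assignment.values()) == {"LB1": 3, "LB2": 3, "LB3": 3}
```

Real ECMP is usually hash-based per flow rather than per-flow round robin, but the balancing effect over many flows is the same.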
The four-layer load balancing device distributes the traffic data to multiple back-end servers or service nodes using a preset load balancing algorithm (such as round robin, least connections or hashing) according to information such as the IP address and port number of the traffic data.
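For example, the least-connections choice mentioned above can be sketched in a few lines (Python; the server names and connection counts are illustrative):

```python
def least_connections(active_conns):
    """L4 choice: pick the backend with the fewest active connections."""
    return min(active_conns, key=active_conns.get)

conns = {"srv1": 12, "srv2": 3, "srv3": 7}
assert least_connections(conns) == "srv2"  # 3 is the smallest count
```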
In the implementation of this embodiment, the five-tuple hash algorithm of the core switch ensures that the traffic data of the same session is always forwarded to the second core switch along the same path, preserving session consistency. The second core switch distributes the traffic to multiple first load balancing devices through equivalent routes, achieving load balancing and redundancy backup of the traffic. The four-layer load balancing device distributes the traffic according to IP and port information, so that the traffic can be allocated to the back-end servers more precisely, keeping the load of each server balanced.
Optionally, in an embodiment of the present application, forwarding, by the intranet core switch, the traffic data to the second load balancing device includes:
The intranet core switch forwards the traffic data to the second load balancing device through a five-tuple hash algorithm, where the second load balancing device is a seven-layer (Layer 7) load balancing device.
The intranet core switch parses the header information of the traffic data and extracts five key features: the source IP address, destination IP address, source port, destination port and protocol type. A preset hash algorithm (such as consistent hashing or FNV hashing) is applied to this five-tuple to generate a hash value, which is used to determine the forwarding path of the traffic data. The intranet core switch then forwards the traffic data to a specific second load balancing device according to the hash value.
It should be noted that multiple IP packets of the same session need to be hashed to the same second load balancing device; otherwise, the second load balancing device cannot reassemble and identify the application layer request.
The seven-layer load balancing device parses application layer information of the traffic data, such as the URL, Host header and Cookie of an HTTP request, and distributes the traffic data to multiple back-end servers or service nodes according to a preset seven-layer load balancing policy (such as URL round robin, URL hash, Cookie hash, or least requests).
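A simplified sketch of such seven-layer dispatch follows (Python; the URL pools, cookie-hash pinning and backend names are assumptions for illustration, not the patent's implementation):

```python
import hashlib

def l7_dispatch(request, url_pools, default_pool):
    """Route by URL prefix, then pin a client to one backend via cookie hash."""
    path = request["path"]
    # First matching URL prefix selects the backend pool:
    pool = next((p for prefix, p in url_pools if path.startswith(prefix)),
                default_pool)
    # Hashing the session cookie keeps one client on one backend:
    session = request.get("cookie", "")
    idx = int(hashlib.sha256(session.encode()).hexdigest(), 16) % len(pool)
    return pool[idx]

url_pools = [("/api/", ["api1", "api2"]), ("/static/", ["cdn1"])]
req = {"path": "/static/logo.png", "cookie": "sid=abc"}
assert l7_dispatch(req, url_pools, ["web1"]) == "cdn1"

# The same cookie lands on the same backend across requests:
r1 = l7_dispatch({"path": "/api/x", "cookie": "sid=42"}, url_pools, ["web1"])
r2 = l7_dispatch({"path": "/api/y", "cookie": "sid=42"}, url_pools, ["web1"])
assert r1 == r2
```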
In the implementation process of the embodiment, the seven-layer load balancing device distributes the traffic according to the application layer information, so that the traffic is more accurately distributed to the back-end servers, and the load balancing capability of each server is improved. The system has the advantages of improving the overall performance and the resource utilization rate of the system and avoiding overload or idle of the server caused by concentrated flow.
Optionally, in an embodiment of the present application, forwarding the traffic data to the corresponding security device by using a five-tuple hash algorithm includes:
extracting the five-tuple in the traffic data, where the five-tuple includes a source address, a destination address, a source port, a destination port and a protocol type; calculating a hash value according to the five-tuple, where the hash value is used to determine the security device corresponding to the traffic data; and forwarding the traffic data to the corresponding security device according to the hash value.
The five-tuple hash algorithm is described below by way of example. Assume a user accesses the HTTP service of a website (server IP address 203.0.113.50) over TCP port 80 from a local browser (IP address 192.168.1.100), and the browser randomly selects a source port, for example 54321. The five-tuple of this traffic data is then: source address 192.168.1.100 (the IP address of the requesting user device); destination address 203.0.113.50 (the IP address of the target web server); source port 54321 (the random port used by the requesting application on the user device); destination port 80 (the port on which the web server listens for HTTP); and protocol type TCP (the transport layer protocol).
These five parts are hashed using a specific hash algorithm. For example, with a simple FNV hash (a more suitable algorithm may be selected in an actual scenario), the five-tuple is combined into the string "192.168.1.100:54321:203.0.113.50:80:tcp" and a hash value is computed over the string, yielding, for example, the hash value 0x3A5B78.
Assume that the data center has multiple security devices (e.g., Sec1, Sec2, Sec3), each corresponding to a range of hash values. From the calculated hash value 0x3A5B78, it is determined that the traffic data should be forwarded to the corresponding security device Sec2. The core switch forwards the traffic data to the security cluster switch, and the security cluster switch forwards it to security device Sec2 according to the pre-configured mapping between hash values and security devices.
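The FNV-style hashing in this example can be sketched as follows. Note the 0x3A5B78 value in the text is illustrative rather than the actual FNV output for this string, and the device list here is an assumption:

```python
def fnv1a_32(data: bytes) -> int:
    """32-bit FNV-1a hash (standard offset basis 0x811C9DC5, prime 0x01000193)."""
    h = 0x811C9DC5
    for byte in data:
        h ^= byte
        h = (h * 0x01000193) & 0xFFFFFFFF
    return h

devices = ["Sec1", "Sec2", "Sec3"]
key = "192.168.1.100:54321:203.0.113.50:80:tcp"
chosen = devices[fnv1a_32(key.encode()) % len(devices)]
# The same five-tuple string always lands on the same security device:
assert chosen == devices[fnv1a_32(key.encode()) % len(devices)]
```

Taking the hash modulo the device count is the simplest value-range partition; consistent hashing reduces remapping when devices are added or removed.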
In the implementation of this embodiment, traffic is dispersed to multiple security devices through the hash algorithm, avoiding overload of a single security device due to excessive traffic. Each security device shares part of the traffic, improving the processing capacity and performance of the whole system, and the multiple security devices working in parallel make full use of the hardware resources of the data center, improving resource utilization. When service growth requires new security devices, it is only necessary to add the new device to the security cluster switch and update the mapping between hash values and devices, so that part of the traffic is forwarded to the new device. This flexible expansion does not require large-scale reconstruction of the existing system, reducing expansion cost and complexity.
Referring to fig. 2, a schematic diagram of a data center network architecture provided by an embodiment of the present application is shown, where the data center network architecture includes a core switch, a security cluster switch, a security device, a first load balancing device, a second load balancing device, and an intranet core switch.
The core switch is used for receiving traffic data and performing first-layer routing to the security cluster switch; the security cluster switch is used for forwarding the traffic data to the security devices and achieving load balancing of the security devices through dynamic routing; the security devices are used for performing security monitoring on the traffic data and distributing routes; the first load balancing device is used for performing traffic distribution; the intranet core switch is used for connecting the first load balancing device and the second load balancing device to build the intranet framework; and the second load balancing device is used for parsing the application program protocol and performing secondary load balancing on the traffic data.
The core switch is connected with the security cluster switch and the first load balancing device, the security cluster switch is respectively connected with each component in the corresponding security device, and the intranet core switch is respectively connected with the first load balancing device and the second load balancing device.
The core switch serves as the ingress of the data center network, and its primary function is to receive traffic data from the external network. It has high-performance routing capability and can rapidly perform first-layer routing processing on the traffic data. The core switch is connected with the security cluster switch and the first load balancing device through high-speed links, ensuring fast forwarding of data. The core switch generally adopts a multi-link connection mode to provide redundancy backup and prevent network interruption caused by a single point of failure.
The security cluster switch is connected to the core switch and has the main responsibility of forwarding traffic data to the security device. And the security cluster switch is connected with each security device by adopting an independent link, so that the stability and the reliability of data transmission are ensured. The security cluster switch utilizes a dynamic routing protocol to know the state and the load condition of each security device in real time. When the traffic data arrives at the security cluster switch, the traffic data is reasonably distributed to each security device according to the information provided by the dynamic routing protocol, so that overload of a single security device is reduced, and meanwhile, the traffic data can efficiently pass through a security monitoring link.
The first load balancing device is connected with the core switch, and its main function is to perform preliminary load balancing processing on the traffic data. It receives traffic data from the core switch through static or dynamic routing and distributes the data to multiple back-end servers or service nodes according to a preset load balancing algorithm (such as round robin or least connections).
The connection relationship between the components can adopt high-speed Ethernet links, such as 10GE or 40GE links, so as to meet the requirement of large data volume transmission.
The intranet core switch is connected with the first load balancing device and the second load balancing device through multiple links to form a redundant backup network topology structure, and network interruption caused by single link faults is reduced. A link aggregation technology can be adopted to aggregate a plurality of physical links into one logical link, so that the bandwidth and the reliability of the link are improved.
In the implementation of this embodiment, through multi-stage load balancing (first load balancing and second load balancing), traffic is reasonably distributed to multiple servers and devices, single-point overload is mitigated, and overall processing capacity and response speed are improved. The five-tuple hash algorithm ensures that packets of the same session are processed through the same path, reducing out-of-order handling and improving user experience. The core switch is connected to multiple devices, so even if one device fails, the other devices can still work, ensuring network stability. New load balancing devices, security devices or servers can be conveniently added, achieving linear expansion of network processing capacity to meet service growth demand.
In an alternative embodiment, there may be multiple instances of each component in the data center network framework. For example, there may be multiple core switches: the traffic collected by the other core switches is forwarded to one core switch A, and core switch A evenly distributes the traffic to multiple security cluster switches; each security cluster switch has a corresponding plurality of security devices, and each security device is connected to its corresponding security cluster switch. There may also be multiple first load balancing devices: after receiving the security-monitored traffic data, the core switch distributes it to the multiple first load balancing devices, and each first load balancing device is connected to the intranet core switch and forwards the received traffic data to the second load balancing device.
It should be understood that, the apparatus corresponds to the foregoing data center traffic forwarding method embodiment, and is capable of executing the steps involved in the foregoing method embodiment, and specific functions of the apparatus may be referred to the foregoing description, and detailed descriptions are omitted herein as appropriate to avoid redundancy. The device includes at least one software functional module that can be stored in memory in the form of software or firmware (firmware) or cured in an Operating System (OS) of the device.
Please refer to fig. 3, which illustrates a schematic structural diagram of an electronic device according to an embodiment of the present application. An electronic device 300 according to an embodiment of the present application includes a processor 310 and a memory 320, where the memory 320 stores machine-readable instructions executable by the processor 310, which when executed by the processor 310, perform a method as described above.
The components shown in fig. 3 may be implemented in hardware, software, or a combination thereof. The electronic device 300 may be a physical device, such as a server, a PC, etc., or may be a virtual device, such as a virtual machine, a virtualized container, etc. The electronic device 300 is not limited to a single device, and may be a combination of a plurality of devices or a cluster of a large number of devices.
The embodiment of the application also provides a storage medium, wherein a computer program is stored on the storage medium, and the computer program is executed by a processor to execute the method.
The storage medium may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
Embodiments of the present application also provide a computer program product comprising computer program instructions which, when executed by a processor, perform a method as above.
In the embodiments of the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The foregoing description is merely an optional implementation of the embodiment of the present application, but the scope of the embodiment of the present application is not limited thereto, and any person skilled in the art may easily think about changes or substitutions within the technical scope of the embodiment of the present application, and the changes or substitutions are covered by the scope of the embodiment of the present application.

Claims (10)

1. The data center flow forwarding method is characterized by being applied to a data center network framework and comprising the following steps:
Receiving flow data through a core switch, distributing the flow data to at least one security cluster switch, performing security monitoring on the flow data by using security equipment corresponding to the security cluster switch, and returning the flow data after the security monitoring to the core switch;
the core switch forwards the flow data after safety monitoring to a plurality of first load balancing devices, wherein the first load balancing devices are used for distributing the flow;
the first load balancing device forwards the flow data to an intranet core switch;
And the intranet core switch forwards the flow data to second load balancing equipment, and the second load balancing equipment is utilized to respond to the flow data.
2. The method of claim 1, wherein security monitoring the traffic data with the security device corresponding to the security cluster switch and returning the security monitored traffic data to the core switch comprises:
the security device sends a security layer route to the corresponding security cluster switch;
The security cluster switch learns the security layer route through a dynamic routing protocol, and forwards the traffic data to the corresponding security device by utilizing a five-tuple hash algorithm according to the traffic data destination address;
And the security equipment carries out security monitoring on the received traffic data and returns the traffic data after the security monitoring to the core switch.
3. The method of claim 1, wherein receiving traffic data by a core switch and distributing the traffic data to at least one security cluster switch comprises:
The method comprises the steps of acquiring an operation mode, wherein the operation mode comprises a normal mode and an abnormal mode, the normal mode represents safety monitoring of flow data, the abnormal mode represents failure of safety equipment, and the flow data is directly forwarded to first load balancing equipment through a core switch;
and in the case that the operation mode is the normal mode, executing the step of receiving traffic data through a core switch and distributing the traffic data to at least one security cluster switch.
4. The method of claim 3, wherein the routing table of the core switch includes a security layer route and a load balancing layer route, wherein the security layer route has a higher priority than the load balancing layer route, wherein the method further comprises:
and in an abnormal mode, the security layer route is closed, and the core switch forwards the received traffic data to a plurality of first load balancing devices.
5. The method of claim 1, wherein the core switch forwards the security monitored traffic data to a plurality of first load balancing devices, comprising:
The core switch forwards the flow data after safety monitoring to a second core switch through a five-tuple hash algorithm, the second core switch distributes the flow data to the first load balancing device through an equivalent route, and the first load balancing device is four-layer load balancing device.
6. The method of claim 1, wherein the intranet core switch forwarding the traffic data to a second load balancing device, comprising:
And the intranet core switch forwards the flow data to the second load balancing device through a five-tuple hash algorithm, wherein the second load balancing device is seven-layer load balancing device.
7. The method of claim 2, wherein forwarding the traffic data to the corresponding secure device using a five-tuple hash algorithm comprises:
extracting a five-tuple from the traffic data, wherein the five-tuple comprises a source address, a destination address, a source port, a destination port, and a protocol type;
calculating a hash value from the five-tuple, wherein the hash value is used to determine the security device corresponding to the traffic data;
and forwarding the traffic data to the corresponding security device according to the hash value.
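The three steps of claim 7 can be sketched as follows. A key property of five-tuple hashing is flow affinity: all packets of one connection carry the same five-tuple, so they always reach the same security device. The hash function (SHA-256 here) and the device list are assumptions for demonstration; the claims do not specify a particular hash.

```python
# Sketch of five-tuple hash device selection (claim 7).
# The hash function and device names are illustrative assumptions.
import hashlib

def pick_security_device(src_ip, dst_ip, src_port, dst_port, protocol, devices):
    # Step 1: extract the five-tuple from the traffic data.
    five_tuple = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{protocol}"
    # Step 2: calculate a hash value from the five-tuple.
    digest = hashlib.sha256(five_tuple.encode()).digest()
    hash_value = int.from_bytes(digest[:8], "big")
    # Step 3: map the hash value to a security device.
    return devices[hash_value % len(devices)]

devices = ["sec-dev-0", "sec-dev-1", "sec-dev-2"]
first = pick_security_device("10.0.0.1", "10.0.1.5", 51000, 443, "TCP", devices)
second = pick_security_device("10.0.0.1", "10.0.1.5", 51000, 443, "TCP", devices)
assert first == second  # identical five-tuples always map to the same device
```

The same mechanism serves claims 5 and 6: hashing over the five-tuple and taking the result modulo the number of equal-cost next hops spreads distinct flows across devices while keeping each flow's packets on one path, which matters for stateful security inspection.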
8. A data center network architecture, characterized by comprising a core switch, a security cluster switch, security devices, first load balancing devices, a second load balancing device, and an intranet core switch;
the core switch is configured to receive traffic data and perform first-layer routing to the security cluster switch;
the security cluster switch is configured to forward the traffic data to the security devices and to achieve load balancing among the security devices through dynamic routing;
the security devices are configured to perform security monitoring on the traffic data and to advertise routes;
the first load balancing devices are configured to perform traffic distribution;
the intranet core switch is configured to connect the first load balancing devices and the second load balancing device to construct an intranet architecture;
the second load balancing device is configured to parse an application-layer protocol and perform secondary load balancing on the traffic data;
the core switch is connected to the security cluster switch and the first load balancing devices, the security cluster switch is connected to each corresponding security device, and the intranet core switch is connected to the first load balancing devices and the second load balancing device respectively.
9. A computer program product comprising computer program instructions which, when executed by a processor, perform the method of any of claims 1 to 7.
10. A computer readable storage medium, characterized in that it has stored thereon computer program instructions which, when executed by a processor, perform the method according to any of claims 1 to 7.
CN202511049953.1A 2025-07-29 2025-07-29 Data center traffic forwarding method, program product, electronic device and storage medium Pending CN120729797A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202511049953.1A CN120729797A (en) 2025-07-29 2025-07-29 Data center traffic forwarding method, program product, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN120729797A true CN120729797A (en) 2025-09-30

Family

ID=97169954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202511049953.1A Pending CN120729797A (en) 2025-07-29 2025-07-29 Data center traffic forwarding method, program product, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN120729797A (en)

Similar Documents

Publication Publication Date Title
US10534601B1 (en) In-service software upgrade of virtual router with reduced packet loss
CN107454155B (en) Fault processing method, device and system based on load balancing cluster
US9185031B2 (en) Routing control system for L3VPN service network
CN106375231B (en) A kind of flow switching method, equipment and system
US9042234B1 (en) Systems and methods for efficient network traffic forwarding
US8631113B2 (en) Intelligent integrated network security device for high-availability applications
US10148554B2 (en) System and methods for load placement in data centers
US9455995B2 (en) Identifying source of malicious network messages
WO2018077238A1 (en) Switch-based load balancing system and method
US9712649B2 (en) CCN fragmentation gateway
JP2017510197A (en) System and method for software-defined routing of traffic within and between autonomous systems with improved flow routing, scalability and security
CN113472646B (en) Data transmission method, node, network manager and system
EP2962429A1 (en) Traffic recovery in openflow networks
JP2007201966A (en) Traffic control method, apparatus and system
US9973578B2 (en) Real time caching efficient check in a content centric networking (CCN)
US12177128B2 (en) Methods and systems for autonomous rule-based task coordination amongst edge devices
KR101569857B1 (en) Method and system for detecting client causing network problem using client route control system
Thorat et al. Optimized self-healing framework for software defined networks
US10944695B2 (en) Uplink port oversubscription determination
WO2016187967A1 (en) Method and apparatus for realizing log transmission
CN118612145A (en) A link abnormality processing method, device and related equipment
CN112350988A (en) Method and device for counting byte number and connection number of security policy
CN120729797A (en) Data center traffic forwarding method, program product, electronic device and storage medium
CN113285836B (en) System and method for enhancing toughness of software system based on micro-service real-time migration
Konidis et al. Evaluating traffic redirection mechanisms for high availability servers

Legal Events

Date Code Title Description
PB01 Publication