
CN113326100A - Cluster management method, device and equipment and computer storage medium

Info

Publication number
CN113326100A
Authority
CN
China
Prior art keywords
node
nodes
cluster
pool
control node
Prior art date
Legal status
Granted
Application number
CN202110722748.2A
Other languages
Chinese (zh)
Other versions
CN113326100B (en)
Inventor
洪亚苹
杨旭荣
Current Assignee
Sangfor Technologies Co Ltd
Original Assignee
Sangfor Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Sangfor Technologies Co Ltd
Priority to CN202110722748.2A
Publication of CN113326100A
Application granted
Publication of CN113326100B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 - Hypervisors; Virtual machine monitors
    • G06F 9/45558 - Hypervisor-specific management and integration aspects
    • G06F 2009/4557 - Distribution of virtual machine instances; Migration and load balancing
    • G06F 2009/45595 - Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Hardware Redundancy (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiments of the present application disclose a cluster management method, apparatus, device, and computer storage medium. The method includes: a balanced scheduler distributes acquired service processing requests in a balanced manner to target nodes in a stateless node pool of the cluster; each target node processes the service processing requests it receives; and when a target node determines, while processing a received service processing request, that the common data of the cluster needs to be accessed, it accesses a control node in a stateful node pool of the cluster to complete processing of the request.

Description

Cluster management method, device and equipment and computer storage medium
Technical Field
The embodiments of the present application relate to the field of Internet technologies, and in particular, but not exclusively, to a cluster management method, apparatus, device, and computer storage medium.
Background
In existing cluster management methods, when a node goes offline, a vote is required to form a new available cluster; when the number of nodes is large, the election mechanism takes a long time to converge, so service failover is slow and services are affected. When service processing requests are handled, all client requests are first sent to the master node, which load-balances them and distributes them to the other nodes in the cluster for processing; the master node is therefore under heavy pressure, and when the number of concurrent requests is large a single node cannot bear the load.
Disclosure of Invention
In view of this, embodiments of the present application provide a cluster management method, an apparatus, a device, and a computer storage medium.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides a cluster management method, including: a balanced scheduler distributes acquired service processing requests in a balanced manner to target nodes in a stateless node pool of the cluster; each target node processes the service processing requests it receives; and when a target node determines, while processing a received service processing request, that the common data of the cluster needs to be accessed, it accesses a control node in a stateful node pool of the cluster to complete processing of the request.
In a second aspect, an embodiment of the present application provides a cluster apparatus, where the apparatus includes: a balanced scheduler, configured to distribute acquired service processing requests in a balanced manner to target nodes in a stateless node pool of the cluster; and target nodes, each configured to process the service processing requests it receives, and each further configured to access a control node in a stateful node pool of the cluster when it determines, while processing a received service processing request, that the common data of the cluster needs to be accessed, so as to complete processing of the request.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program that is executable on the processor, and the processor implements the above method when executing the program.
In a fourth aspect, embodiments of the present application provide a computer storage medium storing executable instructions for causing a processor to implement the above method when executed.
In the embodiments of the present application, balanced scheduling is implemented by the balanced scheduler, which effectively avoids excessive load on any single node in the cluster. By dividing the cluster nodes into a stateless node pool and a stateful node pool, with the target nodes in the stateless node pool processing service processing requests and the control nodes in the stateful node pool storing the common data of the cluster, the processing efficiency of service processing requests is effectively improved.
Drawings
Fig. 1A is a schematic diagram of a system architecture of cluster management provided in an embodiment of the present application;
fig. 1B is a schematic flowchart of a cluster management method according to an embodiment of the present application;
fig. 1C is a schematic diagram of a cluster node according to an embodiment of the present application;
fig. 1D is a schematic flowchart illustrating a process for scheduling processing nodes in a stateless node pool by a balanced scheduler according to an embodiment of the present application;
fig. 1E is a flowchart of processing of a balanced scheduler when a processing node in a stateless node pool fails according to an embodiment of the present application;
FIG. 2A is a schematic diagram of a configuration interface according to an embodiment of the present disclosure;
fig. 2B is a schematic diagram of cluster service distribution provided in an embodiment of the present application;
fig. 2C is a schematic diagram of a node in a stateless node pool and a node in a stateful node pool according to an embodiment of the present application;
fig. 2D is a schematic flowchart of data storage service switching when a control node in a stateful node pool fails according to an embodiment of the present disclosure;
fig. 2E is a schematic processing flow diagram of a case where a control node in a stateful node pool fails according to an embodiment of the present application;
fig. 2F is a schematic diagram of expanding nodes in a stateless node pool according to an embodiment of the present application;
fig. 3A is a schematic flowchart of a scenario that a user accesses a cloud platform according to an embodiment of the present application;
fig. 3B is a flowchart illustrating a method for processing an access request according to an embodiment of the present application;
fig. 3C is a schematic flowchart of a method for a proxy client to forward service requests when a control node changes, according to an embodiment of the present application;
fig. 3D is a schematic flowchart of a method for recovering the stateful node pool when the master control node is detected to be offline, according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a cluster management device provided in the embodiment of the present application;
fig. 5 is a hardware entity diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, specific technical solutions of the present invention will be described in further detail below with reference to the accompanying drawings in the embodiments of the present application. The following examples are intended to illustrate the present application but are not intended to limit the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first\second\third" are used only to distinguish similar objects and do not denote a particular order; where permitted, the specific order or sequence may be interchanged, so that the embodiments of the application described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
Virtual Internet Protocol (IP) address: generally used in scenarios requiring high service availability; when the primary server fails and cannot provide services externally, the virtual IP is dynamically switched to a standby server, so that users do not perceive the failure.
haproxy: used to provide high availability, load balancing, and application proxying based on the Transmission Control Protocol (TCP) and the Hypertext Transfer Protocol (HTTP).
Pacemaker: a cluster resource manager. It uses the messaging and membership management capabilities provided by the cluster infrastructure (heartbeat or corosync) to detect and recover from failures at the node or resource level, so as to achieve maximum availability of cluster services.
corosync: part of the cluster management suite; it collects information such as heartbeats between nodes and reports node availability to the upper layer.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
It should be understood that some of the embodiments described herein are only for explaining the technical solutions of the present application, and are not intended to limit the technical scope of the present application.
Fig. 1A is a schematic system architecture diagram of cluster management according to an embodiment of the present invention, as shown in fig. 1A, the schematic system architecture diagram includes a balanced scheduler 11 and a cluster node pool 12, where the cluster node pool 12 includes a stateless node pool 121 and a stateful node pool 122, where,
and the balanced scheduler 11 is configured to provide an access entry, load-balance schedule the access request to nodes in the stateless node pool, and eliminate abnormal nodes in the stateless node pool according to the obtained internal load condition and the internal service state in the stateless node pool.
Stateless node pool 121, which provides access services for business services and does not store data at any time while providing access services, may include processing node 1, processing node 2, processing node 3, and processing node 4. A processing node can be destroyed or created at will, and no user data is lost when a processing node is destroyed; when access services are processed, different processing nodes can be switched arbitrarily without affecting the user's access service. In an implementation, the nodes of the stateless node pool may be determined according to the operation speed of the nodes; for example, nodes whose operation speed meets the requirement may be designated as nodes of the stateless node pool according to actual needs.
Stateful node pool 122, which is used to run the data storage service, may include control node 1, control node 2, and control node 3. A control node is used to store the common data of the cluster and cannot be destroyed at will. In an implementation, the nodes of the stateful node pool may be determined according to the storage performance and operation speed of the nodes; for example, nodes whose storage performance and operation speed meet the requirements may be designated as nodes of the stateful node pool according to actual needs.
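Purely as an illustration (not taken from the patent text), the division into the two pools in fig. 1A can be pictured with the minimal Python sketch below; all class and field names are assumptions made for the sketch.

```python
# A minimal illustrative sketch of the architecture in Fig. 1A: a small stateful
# pool of control nodes holding the cluster's common data, and a stateless pool
# of interchangeable processing nodes that can be created or destroyed at will.
from dataclasses import dataclass, field


@dataclass
class ProcessingNode:            # stateless pool member: holds no common data
    node_id: str


@dataclass
class ControlNode:               # stateful pool member: runs the common-data storage service
    node_id: str
    common_data: dict = field(default_factory=dict)


@dataclass
class Cluster:
    stateless_pool: list[ProcessingNode] = field(default_factory=list)
    stateful_pool: list[ControlNode] = field(default_factory=list)


cluster = Cluster(
    stateless_pool=[ProcessingNode(f"processing-{i}") for i in range(1, 5)],
    stateful_pool=[ControlNode(f"control-{i}") for i in range(1, 4)],
)
print(len(cluster.stateless_pool), "processing nodes,", len(cluster.stateful_pool), "control nodes")
```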
As shown in fig. 1B, a cluster management method provided in an embodiment of the present application includes:
step S110, the balanced scheduler distributes the acquired multiple service processing requests in a balanced manner to target nodes in a stateless node pool of the cluster;
load balancing by the balanced scheduler means that when a single node cannot support the existing access volume, multiple nodes can be deployed to form a cluster, and service processing requests are then distributed through load balancing to each node in the cluster, so that the multiple nodes share the request pressure together.
In some embodiments, as shown in fig. 1A, the balanced scheduler 11 is configured to provide access entries for users to access the cluster, and load-balance schedule access requests of the users to target nodes (processing nodes) in the stateless node pool 121.
Fig. 1C is a schematic diagram of a cluster node according to an embodiment of the present application, and as shown in fig. 1C, the schematic diagram includes: nodes in the stateful node pool (control node 1, control node 2, and control node 3) and nodes in the stateless node pool (processing node 1, processing node 2, processing node 3 … … processing node n), wherein,
the nodes (control node 1, control node 2, and control node 3) in the stateful node pool are a specified number of nodes selected from the entire cluster nodes, and are used as the nodes of the stateful node pool and are responsible for running the storage service of the public data. Therefore, compared with the prior art, the number of nodes in the node pool with the state is small, and the re-election convergence time is short when the nodes fail.
The nodes in the stateless node pool (processing node 1, processing node 2, processing node 3 … …, processing node n) are all nodes except the nodes in the stateful node pool in the entire cluster node.
Step S120, each target node processes the received service processing request;
in some embodiments, the target node does not store data at any time while providing access services. The target node may complete the processing of a service processing request without accessing the common data of the cluster.
Step S130, when processing the received service processing request, each target node accesses a control node in a stateful node pool of the cluster under the condition that it is determined that the common data of the cluster needs to be accessed, so as to complete processing of the service processing request.
In some embodiments, as shown in fig. 1A, a control node in the stateful node pool is used to store the common data clustered in the cluster, and in a case that a target node in the stateless node pool determines that the common data needs to be accessed, the control node may be accessed to complete processing of the service processing request.
In the embodiment of the application, the balanced scheduling is realized through the balanced scheduler, and the problem of overlarge pressure of a single node in a cluster is effectively avoided. By means of classifying the cluster nodes into a stateless node pool and a stateful node pool, the target nodes in the stateless node pool are used for processing the service processing requests, and the control nodes in the stateful node pool are used for storing the public data of the cluster, so that the processing efficiency of the service processing requests is effectively improved.
The step S110 "the equilibrium scheduler distributes the acquired multiple service processing requests to the target node in the stateless node pool of the cluster in an equilibrium manner" may be implemented by the following steps:
step S1101, the management component of the cluster configures a virtual internet protocol address in the balanced scheduler to provide an access entry of the service processing request;
in some embodiments, the cluster may provide a fixed access entry to the outside, so that changes to nodes inside the cluster or modifications of internal internet protocol addresses do not change the external request entry. The virtual internet protocol address may be configured at the balanced scheduler.
Step S1102, the balanced scheduler acquires load information and service state information of each node in the stateless node pool;
in some embodiments, the load information of a node may include the consumption status of node resources; measures of load information include the processing capacity of the Central Processing Unit (CPU), CPU utilization, CPU ready-queue length, available disk and memory space, process response time, and the like.
The service state information may include information on whether the node is available: when the node is unavailable, the service state information indicates a node failure; when the node is available, it indicates that the node is available.
The balanced scheduler may obtain load information and service state information for each node in the stateless node pool.
Step S1103, the balanced scheduler distributes the plurality of service processing requests to the target nodes in the stateless node pool in a balanced manner according to the load information and the service state information of each node in the stateless node pool.
In some embodiments, as shown in fig. 1D, a schematic flowchart of a process for scheduling, by an equilibrium scheduler, a processing node in a stateless node pool is provided in an embodiment of the present application, where the scheduling process includes the following steps:
step 1, the balanced scheduler configures the virtual IP;
step 2, the balanced scheduler configures the processing nodes in the stateless node pool as its cluster node list;
step 3, the processing nodes in the cluster node list periodically report their own load information and service state information to the balanced scheduler;
and step 4, the balanced scheduler distributes users' access requests in a balanced manner to target nodes in the stateless node pool according to the load and service state information periodically reported by the processing nodes (a minimal code sketch of this loop is given after these steps).
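As an illustration only, the reporting-and-dispatch loop of steps 1 to 4 above might be sketched as follows; the class and field names, the 10-second report timeout and the 0.8 load limit are assumptions for the sketch, not values from the patent.

```python
# Processing nodes report load and service state; the balanced scheduler spreads
# requests over the nodes that currently look healthy (Fig. 1D and Fig. 1E).
from dataclasses import dataclass, field
import itertools
import time


@dataclass
class NodeReport:
    node_id: str
    cpu_load: float          # e.g. CPU utilisation in [0, 1]
    service_ok: bool         # service state reported by the node
    reported_at: float = field(default_factory=time.time)


class BalancedScheduler:
    def __init__(self, report_timeout: float = 10.0, load_limit: float = 0.8):
        self.reports: dict[str, NodeReport] = {}
        self.report_timeout = report_timeout
        self.load_limit = load_limit
        self._rr = itertools.count()

    def receive_report(self, report: NodeReport) -> None:
        """Called when a processing node reports its load and service state."""
        self.reports[report.node_id] = report

    def available_nodes(self) -> list[str]:
        """Nodes that reported recently, are not failed, and are not overloaded."""
        now = time.time()
        return sorted(
            r.node_id
            for r in self.reports.values()
            if r.service_ok
            and r.cpu_load <= self.load_limit
            and now - r.reported_at <= self.report_timeout
        )

    def dispatch(self, request: str) -> str:
        """Pick a target node for one request (simple round-robin here)."""
        nodes = self.available_nodes()
        if not nodes:
            raise RuntimeError("no available processing node in the stateless pool")
        return nodes[next(self._rr) % len(nodes)]


if __name__ == "__main__":
    sched = BalancedScheduler()
    sched.receive_report(NodeReport("processing-1", cpu_load=0.2, service_ok=True))
    sched.receive_report(NodeReport("processing-2", cpu_load=0.9, service_ok=True))   # overloaded
    sched.receive_report(NodeReport("processing-3", cpu_load=0.1, service_ok=False))  # failed
    for req in ("req-a", "req-b", "req-c"):
        print(req, "->", sched.dispatch(req))  # only processing-1 qualifies here
```

A node that stops reporting (the failure case of fig. 1E) simply times out of `available_nodes()` and no longer receives requests.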
In some embodiments, as shown in fig. 1E, when a processing node in the stateless node pool fails, the processing flow of the balanced scheduler provided in an embodiment of the present application includes:
step 1, when processing node 1 in the stateless node pool fails or is under high load, processing node 1 stops reporting data to the balanced scheduler;
step 2, the balanced scheduler stops distributing access requests to processing node 1;
and step 3, the balanced scheduler distributes the access requests to the other available processing nodes (target nodes) in the stateless node pool, where the available processing nodes are the nodes that report a normal state to the balanced scheduler.
In the embodiment of the application, because the virtual internet protocol address is configured at the balanced scheduler, it does not drift when a node in the cluster fails, and the service recovery time is short; the balanced scheduler can distribute the plurality of service processing requests in a balanced manner to the target nodes in the stateless node pool according to the load information and service state information of each node in the stateless node pool, which effectively avoids excessive pressure on any single node that processes service requests.
The step S1103, where the balanced scheduler distributes the plurality of service processing requests to the target nodes in the stateless node pool in a balanced manner according to the load information and the service state information of each node in the stateless node pool, may be implemented by:
step S1121, the balanced scheduler determines the nodes in the stateless node pool whose service state information is non-failure as the nodes to be allocated;
in some embodiments, the processing nodes whose service state information is non-failure are the available processing nodes, so the balanced scheduler determines these available processing nodes as the nodes to be allocated that can complete the service processing requests.
Step S1122, the balanced scheduler determines the nodes to be allocated whose load information meets the load requirement as the target nodes;
in some embodiments, the load requirement may be set according to the actual situation; a node whose load is not excessive, that is, a node to be allocated that satisfies the load requirement, may be determined as a target node.
Step S1123, the balanced scheduler distributes the plurality of service processing requests to the target nodes in a balanced manner.
In the embodiment of the application, the balanced scheduler determines the nodes in the stateless node pool whose service state information is non-failure as nodes to be allocated, and then determines the nodes to be allocated whose load information meets the load requirement as target nodes, so that the resulting target nodes can effectively complete the service processing requests.
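A compact sketch of steps S1121 to S1123 under assumed values (the 0.8 load threshold, the per-request load increment and the tuple layout are illustrative, not from the patent):

```python
# Keep non-failed nodes (S1121), drop nodes whose load exceeds the requirement
# (S1122), then spread a batch of requests over the remaining target nodes,
# always choosing the currently least-loaded one (S1123).
import heapq


def distribute(requests, nodes, load_limit=0.8):
    """nodes: iterable of (node_id, load, failed) tuples -> {node_id: [requests]}"""
    # S1121: non-failed nodes become the nodes to be allocated.
    to_allocate = [(load, nid) for nid, load, failed in nodes if not failed]
    # S1122: nodes meeting the load requirement become the target nodes.
    targets = [(load, nid) for load, nid in to_allocate if load <= load_limit]
    if not targets:
        raise RuntimeError("no target node available")
    # S1123: balanced distribution - each request goes to the least-loaded target.
    heapq.heapify(targets)
    assignment = {nid: [] for _, nid in targets}
    for req in requests:
        load, nid = heapq.heappop(targets)
        assignment[nid].append(req)
        heapq.heappush(targets, (load + 0.01, nid))  # assumed per-request load cost
    return assignment


print(distribute(
    ["r1", "r2", "r3", "r4"],
    [("n1", 0.30, False), ("n2", 0.95, False), ("n3", 0.10, True)],
))
```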
The embodiment of the application provides a method for determining a master control node, a standby control node and a slave control node in control nodes, which comprises the following steps:
step 201, a management component of the cluster acquires a preset total number of control nodes;
step 202, the management component determines the number of the slave control nodes according to the total number of the control nodes;
in some embodiments, a master control node, a standby control node, and a plurality of slave control nodes may be provided, and in the case of determining the preset total number of control nodes, one master control node and one standby control node may be subtracted to obtain the number of slave control nodes.
Step 203, the management component obtains a performance index of each node of the cluster, wherein the performance index includes storage performance of the node and operation speed of the node;
and step 204, among the nodes whose storage performance satisfies the storage condition, the management component determines the node whose operation speed satisfies a first operation condition as the master control node, determines the node whose operation speed satisfies a second operation condition as the standby control node, and determines nodes whose operation speed satisfies a third operation condition, in a number meeting a number threshold, as the slave control nodes, where the number threshold is determined according to the number of slave control nodes.
In the embodiment of the application, the master control node, the standby control node and the slave control nodes can be determined according to the storage performance and operation speed of the nodes; the master control node is used to provide the common data of the cluster, the standby control node is used to back up the common data of the cluster, and a slave control node is used to replace the standby control node when the standby control node fails. In this way, effective common data of the cluster can be guaranteed when handling service access that requires access to the common data of the cluster.
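The following sketch illustrates steps 201 to 204 only; the metric names, the minimum-storage threshold and the rule "fastest eligible node is the master, the second fastest is the standby" are assumptions, since the patent only speaks of first, second and third operation conditions.

```python
# Pick the control nodes from the per-node performance indices: filter by
# storage performance, sort by operation speed, take master, standby and
# (total - 2) slaves; everything else stays in the stateless pool.
def pick_control_nodes(nodes, total_control, min_storage):
    """nodes: list of (node_id, storage_perf, op_speed); returns (master, standby, slaves, stateless)."""
    eligible = [n for n in nodes if n[1] >= min_storage]
    eligible.sort(key=lambda n: n[2], reverse=True)      # fastest first
    if len(eligible) < total_control:
        raise ValueError("not enough eligible nodes for the stateful pool")
    master, standby = eligible[0][0], eligible[1][0]
    slaves = [n[0] for n in eligible[2:total_control]]   # slave count = total - 2
    chosen = {master, standby, *slaves}
    stateless = [n[0] for n in nodes if n[0] not in chosen]
    return master, standby, slaves, stateless


nodes = [("a", 90, 9.1), ("b", 95, 8.7), ("c", 40, 9.9), ("d", 88, 7.0), ("e", 92, 6.5)]
print(pick_control_nodes(nodes, total_control=3, min_storage=80))
# -> ('a', 'b', ['d'], ['c', 'e'])
```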
The embodiment of the application provides another method for determining a master control node, a standby control node and a slave control node in a control node, which comprises the following steps:
step 210, the management component presents a configuration interface, and the configuration interface is used for configuring the master control node, the standby control node and the slave control node;
in some embodiments, such as the configuration interface shown in fig. 2A, a user may configure the master control node, the standby control node, and the slave control node by clicking the add node control 21.
Step 211, the management component receives configuration operations for the master control node, the standby control node and the slave control node based on the configuration interface;
step 212, the management component determines one of the master control node, one of the standby control nodes, and at least one of the slave control nodes based on the configuration operation.
In the embodiment of the application, a user can complete the configuration of the master control node, the standby control node and the slave control node in the configuration interface, and the management component determines one master control node, one standby control node and at least one slave control node in the cluster node pool according to the configuration of the user.
The embodiment of the application provides a method for replacing a fault control node, which comprises the following steps:
step 220, when the control node in the stateful node pool fails, the management client acquires the address of the new control node from the management component of the cluster;
in some embodiments, each node in the stateless node pool includes a management client and a proxy client. In the schematic of cluster service distribution shown in fig. 2B, each processing node in the stateless node pool 121 is provided with a service IP 1211, a stateless business service 1212, a proxy client 1213 and a management client 1214, and each control node in the stateful node pool 122 is provided with a stateful business service 1221, a data storage service 1222 and a management component 1223, where,
service IP 1211, which provides the real service port IP of the stateless node and is configured into the available node pool of the front-end balanced scheduler;
stateless business service 1212, which processes access requests but does not store data;
management client 1214, which resets the destination IP of the proxy client when the master control node is switched, so that the proxy client can connect to the new active node;
proxy client 1213, which, when the common data needs to be accessed, forwards the request for accessing the common data to the master control node in the stateful node pool to complete the request;
stateful business service 1221, which provides services that operate on the common data, such as clearing resources and generating operation and maintenance report information;
data storage service 1222, which stores the common data of the cluster, for example in databases such as mysql, redis and mongo;
and management component 1223, which maintains the nodes in the stateful node pool and, when the master control node fails, notifies the other control nodes in the stateful node pool and reassembles the stateful node pool.
Fig. 2C is a schematic diagram of a node of a stateless node pool and a node of a stateful node pool provided in an embodiment of the present application, and as shown in fig. 2C, the node of the stateless node pool includes a proxy client 1213, which is used to access a data storage service of a master control node of the stateful node pool.
Step 221, the management client sends the address of the new control node to the proxy client;
step 222, the proxy client modifies the address of the public data accessing the cluster to the address of the new control node.
In some embodiments, the management client sends the address of the new control node to the proxy client; as shown in fig. 2D, the flow of switching the data storage service when a control node in the stateful node pool fails includes:
step 1, the management client receives a cluster event notification, where the cluster event notification is used to notify of the failed control node and the determined new control node;
step 2, the management client notifies the proxy clients of the nodes in the stateless node pool and changes the proxy configuration according to the failed control node and the determined new control node;
step 3, the proxy client disconnects the failed control node;
and 4, establishing connection between the proxy client and the new control node.
In the embodiment of the application, when a control node in the stateful node pool fails, the management client acquires the address of the new control node from the management component of the cluster and sends it to the proxy client, and the proxy client modifies the address used to access the common data of the cluster to the address of the new control node. In this way, the proxy client shields the stateless services from any awareness of where the common data lives, and the application services can be programmed against the local host. When a control node fails and the data storage service is migrated, the original data storage becomes unavailable; at that point only the destination (the address of the new control node) of the proxy client needs to be modified and reconnected, which achieves fast transfer and recovery of the failed service.
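A minimal sketch of the switch described in fig. 2D, with all class and method names assumed and no real database driver involved:

```python
# When the management client is told that the master control node changed, it
# resets the proxy client's destination; the proxy drops the old connection and
# reconnects to the new control node (steps 1-4 of Fig. 2D).
class ProxyClient:
    def __init__(self, target_addr: str):
        self.target_addr = target_addr
        self.connected = False

    def connect(self):
        # Assumption: a real proxy would open a connection to the data storage
        # service here; the sketch only records the state.
        self.connected = True
        print(f"proxy connected to {self.target_addr}")

    def disconnect(self):
        self.connected = False
        print(f"proxy disconnected from {self.target_addr}")

    def retarget(self, new_addr: str):
        self.disconnect()          # step 3: drop the failed control node
        self.target_addr = new_addr
        self.connect()             # step 4: connect to the new control node


class ManagementClient:
    def __init__(self, proxy: ProxyClient):
        self.proxy = proxy

    def on_cluster_event(self, failed_addr: str, new_addr: str):
        # steps 1-2: receive the cluster event and change the proxy configuration
        print(f"control node {failed_addr} failed, switching to {new_addr}")
        self.proxy.retarget(new_addr)


proxy = ProxyClient("10.0.1.1:3306")
proxy.connect()
ManagementClient(proxy).on_cluster_event("10.0.1.1:3306", "10.0.1.2:3306")
```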
The step 220 "in the case that the control node in the stateful node pool has a failure, the management client obtains the address of the new control node from the management component of the cluster" may be implemented by the following steps:
step 2201, when the nodes in the stateful node pool determine by voting that the master control node has failed, the management component of the cluster switches the standby control node to be the new master control node;
in a distributed system, a voting mechanism can be relied upon to determine whether the cluster as a whole can keep working normally. By default, each node of the cluster holds a certain number of votes; when a node is isolated or fails, the other nodes detect its heartbeat information and send it to the rest of the cluster, and the vote count determines which node has failed and which side may continue to represent the cluster. The side that may continue to work on behalf of the cluster is called the majority, that is, the side holding more than half of the total votes; the side holding no more than half of the total votes is called the minority.
Suppose a cluster consists of three nodes A/B/C. When node A fails or is isolated from the B/C network, who gets to represent the cluster? If A were to work on behalf of the cluster, the whole cluster would be unavailable; if B/C work on behalf of the cluster, the cluster service remains available. When A fails or is isolated by the network, B sends heartbeat detection information about A to C and C sends heartbeat check information about A to B; the vote that A has failed therefore receives 2 of the cluster's 3 votes, which is more than half, so A cannot continue to represent the cluster; the remaining B and C can continue to work on behalf of the cluster, and services can be transferred to these two nodes.
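The majority rule in this example can be checked with a short predicate; the sketch below is illustrative only and not part of the patent.

```python
# A node (or partition) may keep representing the cluster only if it holds
# strictly more than half of the total votes; with three one-vote nodes A/B/C,
# the B/C side wins 2 of 3 votes when A fails or is isolated.
def has_quorum(votes_held: int, total_votes: int) -> bool:
    return votes_held > total_votes / 2


total = 3                           # A, B and C hold one vote each
print(has_quorum(1, total))         # A alone: 1 of 3 votes  -> False, cannot represent the cluster
print(has_quorum(2, total))         # B and C: 2 of 3 votes  -> True, cluster keeps working
```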
In some embodiments, when the nodes in the stateful node pool determine by election voting that the master control node has failed, the standby control node may be switched to be the new master control node.
Step 2202, said managing component determining one of said slave control nodes as a new standby control node among a plurality of said slave control nodes;
step 2203, the management component selects a new slave control node from the nodes in the stateless node pool and adds the new slave control node to the stateful node pool;
step 2204, the management component sends the address of the new master control node to the management client.
Fig. 2E is a schematic processing flow diagram in the case that a control node in a stateful node pool fails according to an embodiment of the present application, and as shown in fig. 2E, the flow includes:
step 1, the cluster management component removes the failed control node from the stateful node pool;
step 2, the cluster management component notifies the management client of the control node failure;
step 3, the cluster management component moves a selected stateless node out of the stateless node pool;
and 4, adding the selected stateless nodes into the stateful node pool by the cluster management component to form a new stateful node pool.
In the embodiment of the application, when the management component of the cluster determines from the votes of the available cluster nodes that the master control node has failed, the standby control node is switched to be the new master control node and the address of the new master control node is sent to the management client. In this way, the number of nodes in the stateful node pool remains constant, and the stateful node pool remains responsible for running the storage service for the common data.
The embodiment of the application provides a method for adding or deleting nodes in a stateless node pool, which comprises the following steps:
step 230, the management component of the cluster presents a configuration interface, and the configuration interface is used for adding or deleting nodes in the stateless node pool;
in some embodiments, a user may add or delete nodes in the stateless node pool by clicking on an add node control 21, such as the configuration interface described in FIG. 2A.
231, the management component receives an operation of adding or deleting a node based on the configuration interface;
step 232, the management component adds or reduces nodes in the stateless node pool based on the operation of adding or deleting nodes to realize the expansion or reduction of the management scale.
Fig. 2F is a schematic diagram of expanding nodes in a stateless node pool according to an embodiment of the present application, as shown in fig. 2F, the schematic diagram includes a stateless node pool 22 before expansion and a stateless node pool 23 after expansion, where,
the stateless node pool 22 before expansion comprises a processing node 1, a processing node 2 and a processing node 3 which respectively manage 1000 resources, and the nodes are used for processing all external service requests
The expanded stateless node pool 23 includes a processing node 1, a processing node 2, a processing node 3, a processing node 4, and a processing node 5, which respectively manage 1000 resources, where the processing node 4 and the processing node 5 are expanded processing nodes.
As can be seen from fig. 2F, the processing nodes in the stateless node pool can be arbitrarily increased or decreased for horizontal expansion, and will not participate in voting of the cluster available node elections; therefore, even if the number of processing nodes is large, the convergence time of the election mechanism is not influenced. The management of large-scale resources can be realized after the scale of the processing nodes is increased.
In the embodiment of the application, the processing nodes in the stateless node pool can be increased or decreased arbitrarily to expand or decrease the management scale.
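An illustrative sketch of the horizontal expansion in fig. 2F; the 1000-resources-per-node figure comes from the figure, while the API names are assumptions.

```python
# Processing nodes can be added or removed freely because they hold no common
# data and never take part in election voting, so the total manageable resources
# simply grow with the node count.
RESOURCES_PER_NODE = 1000           # each processing node manages ~1000 resources in Fig. 2F


class StatelessPool:
    def __init__(self, node_ids):
        self.nodes = list(node_ids)

    def add_node(self, node_id):    # scale out
        self.nodes.append(node_id)

    def remove_node(self, node_id): # scale in; no user data is lost
        self.nodes.remove(node_id)

    def capacity(self):
        return len(self.nodes) * RESOURCES_PER_NODE


pool = StatelessPool(["processing-1", "processing-2", "processing-3"])
print(pool.capacity())              # 3000 resources before expansion
pool.add_node("processing-4")
pool.add_node("processing-5")
print(pool.capacity())              # 5000 resources after expansion
```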
In the prior art, the problems in cluster management are as follows:
(1) Each node is provided with a pacemaker cluster component and a corosync cluster component; when a node goes offline, a vote is required to form a new available cluster, and when the number of nodes is large the election mechanism takes a long time to converge, so service failover is slow and services are affected. This approach therefore cannot support arbitrary expansion of nodes and does not work at large management scales.
(2) When the main control node goes offline it triggers a virtual IP drift; the virtual IP must be reset on another node and the adjacent gateway must be notified to update its ARP information because the physical address has changed, and recovery takes some time, so services are affected.
(3) When client requests are proxied through haproxy, all client requests are first sent to the master node, which load-balances them and distributes them to the other nodes in the cluster for processing; the master node is therefore under heavy pressure, and when the number of concurrent requests is large a single node cannot bear the load, so the approach does not scale.
Fig. 3A is a schematic flowchart of a scenario where a user accesses a cloud platform according to an embodiment of the present application, and as shown in fig. 3A, the schematic flowchart includes the following steps:
step S301, when a user accesses the cluster from the public network, the access request is scheduled in a balanced manner by the balanced scheduler to the stateless nodes in the cluster;
step S302, all business services on the stateless nodes access, through the proxy client, the stateful nodes that carry the common data storage service.
In the embodiment of the application, the balanced scheduler provides the access entry and implements balanced scheduling, which solves the problem that, when clients are proxied through haproxy, all client requests are first sent to the master node and the master node load-balances them to the other nodes in the cluster for processing, so that a single node in the cluster becomes a bottleneck and the solution does not scale.
By classifying the cluster nodes into stateless nodes and stateful nodes, the stateless nodes can be increased or decreased arbitrarily to achieve horizontal expansion of the management scale, and the stateful nodes are formed from a small number of selected nodes, which shortens the election convergence time for the new cluster's available nodes when a node fails and thus achieves fast transfer and recovery of failed services. This solves the problem that, when every node is provided with a pacemaker cluster component and a corosync cluster component, a vote is required to form a new available cluster whenever a node goes offline, and when the number of nodes is large the election mechanism converges slowly, making service failover slow; such an approach cannot support arbitrary expansion of nodes and does not work at large management scales.
Fig. 3B is a flowchart illustrating a method for processing an access request according to an embodiment of the present application, where as shown in fig. 3B, the method includes:
step S311, the stateless service of a target node in the stateless node pool receives an external access request;
step S312, the target node in the stateless node pool determines whether the access request needs to access the common data;
if it is determined that the common data does not need to be accessed, the flow jumps to step S315; if it is determined that the common data needs to be accessed, the flow jumps to step S313.
Step S313, under the condition that the public data is determined to need to be accessed, the proxy client of the target node in the stateless node pool forwards the access request;
step S314, the proxy client of the target node in the stateless node pool accesses the data storage service of the stateful node pool;
and step S315, processing the access request.
In the embodiment of the application, the proxy client shields the stateless services from any awareness of where the common data lives, so the application services can be programmed against the local host; once the scheduler has distributed an external request, all of the processing logic for it resides on the target node, which keeps the overall service logic architecture simple.
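A sketch of the branching in steps S311 to S315, with all handler and field names assumed and a stand-in for the proxy client:

```python
# The stateless service on a target node checks whether the request touches the
# cluster's common data; if not it is handled locally, otherwise the proxy
# client forwards the access to the data storage service in the stateful pool.
class TargetNode:
    def __init__(self, proxy_client):
        self.proxy = proxy_client

    def handle(self, request: dict) -> str:
        if not request.get("needs_common_data"):            # S312: no common data needed
            return f"handled {request['id']} locally"        # S315
        value = self.proxy.fetch(request["key"])             # S313/S314: forward via proxy
        return f"handled {request['id']} with common data {value!r}"


class FakeProxyClient:
    """Stands in for the proxy client that talks to the stateful pool's storage."""
    def __init__(self, store):
        self.store = store

    def fetch(self, key):
        return self.store[key]


node = TargetNode(FakeProxyClient({"quota": 42}))
print(node.handle({"id": "req-1", "needs_common_data": False}))
print(node.handle({"id": "req-2", "needs_common_data": True, "key": "quota"}))
```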
Fig. 3C shows a method for the proxy client to forward service requests when a control node changes, that is, how step S313 above ("when it is determined that the common data needs to be accessed, the proxy client of the target node in the stateless node pool forwards the access request") behaves when a control-node-change message sent by the management component is received. The method includes the following steps:
step S3131, the proxy client receives a control node change message, wherein the control node change message includes a failed control node and a determined new control node;
step S3132, the proxy clients of the nodes in the stateless node pool block external service requests and do not forward them;
step S3133, the proxy client changes the destination address used for forwarding service requests to the address of the new control node;
step S3134, the proxy client resumes and retries forwarding the blocked requests.
In the embodiment of the application, when the master control node in the stateful node pool fails, the data storage service on the failed master control node is migrated and the original data storage becomes unavailable; at that point only the destination of the proxy client on each processing node in the stateless node pool needs to be modified and reconnected. The internal services of the processing nodes in the stateless node pool are unaware of the failure of the master control node in the stateful node pool, and user services are not affected.
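A sketch of the block-retarget-retry behaviour of steps S3131 to S3134; the queueing strategy and method names are assumptions made for the sketch.

```python
# While the master control node is being switched, the proxy client parks
# incoming forwards in a pending queue, rewrites its destination address, then
# replays the queue against the new control node.
from collections import deque


class SwitchingProxy:
    def __init__(self, target_addr):
        self.target_addr = target_addr
        self.blocked = False
        self.pending = deque()

    def _send(self, request):
        # Assumption: a real proxy would forward over a live connection here.
        return f"{request} -> {self.target_addr}"

    def forward(self, request):
        if self.blocked:                         # S3132: block instead of forwarding
            self.pending.append(request)
            return None
        return self._send(request)

    def begin_switch(self):                      # S3131: control-node-change message received
        self.blocked = True

    def finish_switch(self, new_addr):
        self.target_addr = new_addr              # S3133: point at the new control node
        self.blocked = False
        replayed = [self._send(r) for r in self.pending]   # S3134: retry the blocked requests
        self.pending.clear()
        return replayed


proxy = SwitchingProxy("10.0.1.1:3306")
print(proxy.forward("read-1"))                   # forwarded to the old master
proxy.begin_switch()
print(proxy.forward("read-2"))                   # None: parked while the switch is in progress
print(proxy.finish_switch("10.0.1.2:3306"))      # ['read-2 -> 10.0.1.2:3306']
```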
Fig. 3D is a method for recovering a stateful node pool when it is detected that a master control node is offline according to an embodiment of the present disclosure, and as shown in fig. 3D, the method includes:
step S321, the management component detects that the master control node is off-line;
step S322, the management component moves the failed main control node out of the stateful node pool;
in some embodiments, the management component determines the offline master control node as the failed master control node and moves the failed master control node out of the stateful node pool.
Step S323, the management component switches the standby control node into a main control node;
step S324, the management component informs the cluster client that the master control node changes;
step S325, the management component selects a node meeting the conditions from the stateless node pool and moves it out of the stateless node pool;
step S326, the management component adds the selected stateless node to the stateful node pool to reassemble the stateful node pool;
and step S327, the stateful node pool is restored.
In the embodiment of the application, when the management component determines that the master control node has failed, it can effectively reassemble the stateful node pool without affecting the cluster's service processing.
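An illustrative sketch of the recovery flow in fig. 3D and steps 2201 to 2203; the data structures are assumptions, and the notification to the cluster client (step S324) and the eligibility check for the recruited node (step S325) are omitted.

```python
# The management component removes the failed master, promotes the standby,
# promotes a slave to standby, and pulls one node from the stateless pool into
# the stateful pool so that its size stays constant.
def recover_stateful_pool(stateful, stateless):
    """stateful: {"master": id, "standby": id, "slaves": [ids]}; stateless: [ids]."""
    failed = stateful["master"]                       # S321/S322: master offline, removed
    stateful["master"] = stateful["standby"]          # S323: standby becomes the new master
    stateful["standby"] = stateful["slaves"].pop(0)   # one slave becomes the new standby
    recruit = stateless.pop(0)                        # S325: pick a stateless node
    stateful["slaves"].append(recruit)                # S326: add it to the stateful pool
    return failed, stateful, stateless                # S327: stateful pool restored


failed, stateful, stateless = recover_stateful_pool(
    {"master": "control-1", "standby": "control-2", "slaves": ["control-3"]},
    ["processing-1", "processing-2"],
)
print("failed:", failed)
print("stateful pool:", stateful)     # control-2 master, control-3 standby, processing-1 slave
print("stateless pool:", stateless)   # ['processing-2']
```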
Based on the foregoing embodiments, an embodiment of the present application provides a cluster management apparatus, where the apparatus includes modules, each module includes sub-modules, and each sub-module includes a unit, and may be implemented by a processor in an electronic device; of course, the implementation can also be realized through a specific logic circuit; in implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 4 is a schematic structural diagram of a cluster management apparatus provided in an embodiment of the present application, and as shown in fig. 4, the apparatus 400 includes:
the balanced scheduler 401 is configured to distribute the acquired multiple service processing requests to target nodes in a stateless node pool of the cluster in a balanced manner;
each target node 402 is configured to process a received service processing request;
each target node 402 is further configured to, when it is determined that access to the common data of the cluster is required during processing of the received service processing request, access a control node 403 in a stateful node pool of the cluster to complete processing of the service processing request.
In some embodiments, the balancing scheduler is further configured to obtain load information and service status information of each of the stateless nodes; the balance scheduler is further configured to distribute the plurality of service processing requests to target nodes in the stateless node pool in a balanced manner according to the load information and the service state information of each node in the stateless node pool.
In some embodiments, the balancing scheduler is further configured to determine a node in the stateless node pool, where the service state information is non-failure, as a node to be allocated; the balance scheduler is further configured to determine a node to be allocated, for which the load information meets the load requirement, as the target node; the balance scheduler is further configured to distribute the plurality of service processing requests to the target node in a balance manner.
In some embodiments, the control nodes in the stateful node pool include a master control node, a standby control node, and a slave control node, and the apparatus further includes a management component of the cluster, where the management component of the cluster is configured to obtain a preset total number of control nodes; the management component is further used for determining the number of the slave control nodes according to the total number of the control nodes; the management component is further configured to obtain a performance index of each node of the cluster, where the performance index includes storage performance of the node and an operation speed of the node; the management component is further configured to determine, among nodes whose storage performance satisfies a storage condition, a node whose operation speed satisfies a first operation condition as the master control node, a node whose operation speed satisfies a second operation condition as the standby control node, and a node whose operation speed satisfies a third operation condition and whose number satisfies a number threshold value as the slave control node, where the number threshold value is determined according to the number of the slave control nodes.
In some embodiments, the control nodes in the stateful node pool include a master control node, a standby control node, and a slave control node, and the apparatus further includes a management component of the cluster, where the management component of the cluster is configured to present a configuration interface, and the configuration interface is configured to configure the master control node, the standby control node, and the slave control node; the management component is further configured to receive configuration operations on the master control node, the slave control node and the standby control node, respectively, based on the first configuration interface; the management component is further configured to determine one master control node, one standby control node, and at least one slave control node based on the configuration operation.
In some embodiments, each node in the stateless node pool comprises a management client and a proxy client, wherein, in case of a failure of a control node in the stateful node pool, the management client is configured to obtain an address of the new control node from a management component of the cluster; the management client is further used for sending the address of the new control node to the agent client; and the proxy client is used for modifying the address of the public data accessing the cluster into the address of the new control node.
In some embodiments, the control nodes in the stateful node pool include a master control node, a standby control node, and a slave control node, and the management component of the cluster is further configured to switch the standby control node to a new master control node in a case where a failure of the master control node is determined by voting among the nodes in the stateful node pool; said management component further configured to determine one of said slave control nodes as a new standby control node among a plurality of said slave control nodes; the management component is further used for selecting a new slave control node from the nodes in the stateless node pool and adding the new slave control node to the stateful node pool; the management component is further configured to send the address of the new master control node to the management client.
In some embodiments, the management component is further configured to present a configuration interface, the configuration interface configured to add or delete nodes in the stateless node pool; the management component is further used for receiving the operation of adding or deleting the nodes based on the configuration interface; the management component is further configured to add or reduce nodes in the stateless node pool based on the operation of adding or deleting nodes, so as to implement expansion or reduction of a management scale.
In some embodiments, the management component is further configured to configure a virtual internet protocol address at the balanced scheduler to provide an access entry for the service processing request.
The above description of the apparatus embodiments, similar to the above description of the method embodiments, has similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the cluster management method is implemented in the form of a software functional module and is sold or used as a standalone product, it may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, etc.) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the cluster management method provided in the above embodiments are implemented.
Correspondingly, an embodiment of the present application provides an electronic device. Fig. 5 is a schematic diagram of a hardware entity of the electronic device provided in the embodiment of the present application; as shown in fig. 5, the hardware entity of the device 500 includes a memory 501 and a processor 502, where the memory 501 stores a computer program that can run on the processor 502, and the processor 502 implements the steps of the cluster management method provided in the above embodiments when executing the program.
The Memory 501 is configured to store instructions and applications executable by the processor 502, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 502 and modules in the electronic device 500, and may be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM).
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, etc.) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A method for cluster management, the method comprising:
a balancing scheduler distributes a plurality of acquired service processing requests, in a balanced manner, to target nodes in a stateless node pool of the cluster;
each target node processes the received service processing request;
and each target node, in a case where it determines while processing the received service processing request that common data of the cluster needs to be accessed, accesses a control node in a stateful node pool of the cluster, so as to complete the processing of the service processing request.
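By way of illustration only, the following minimal Python sketch shows the request path of claim 1: a balancing scheduler spreads requests over the stateless node pool, and a node touches a control node in the stateful node pool only when a request needs the cluster's common data. All class and field names here (BalancingScheduler, StatelessNode, ControlNode, needs_common_data) are hypothetical and are not taken from the patent.

```python
import random

class ControlNode:
    """A control node in the stateful node pool; it holds the cluster's common data."""
    def __init__(self, common_data):
        self.common_data = common_data

    def read_common_data(self, key):
        return self.common_data.get(key)

class StatelessNode:
    """A target node in the stateless node pool; it keeps no cluster-wide state itself."""
    def __init__(self, name, control_node):
        self.name = name
        self.control_node = control_node  # entry point into the stateful node pool

    def handle(self, request):
        if request.get("needs_common_data"):
            value = self.control_node.read_common_data(request["key"])
            return f"{self.name} handled request using common data: {value}"
        return f"{self.name} handled request locally"

class BalancingScheduler:
    """Distributes incoming service processing requests over the stateless node pool."""
    def __init__(self, stateless_pool):
        self.stateless_pool = stateless_pool

    def dispatch(self, requests):
        # Random choice stands in for whatever balancing policy is actually used.
        return [random.choice(self.stateless_pool).handle(r) for r in requests]

control = ControlNode({"tenant_quota": 10})
pool = [StatelessNode(f"node-{i}", control) for i in range(3)]
scheduler = BalancingScheduler(pool)
print(scheduler.dispatch([{"needs_common_data": True, "key": "tenant_quota"},
                          {"needs_common_data": False}]))
```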
2. The method of claim 1, wherein the balancing scheduler distributing the acquired plurality of service processing requests to the target nodes in the stateless node pool of the cluster in a balanced manner comprises:
the balancing scheduler acquires load information and service state information of each node in the stateless node pool;
and the balancing scheduler distributes the plurality of service processing requests to the target nodes in the stateless node pool in a balanced manner according to the load information and the service state information of each node in the stateless node pool.
3. The method of claim 2, wherein the balancing scheduler distributing the plurality of service processing requests to the target nodes in the stateless node pool in a balanced manner according to the load information and the service state information of each node in the stateless node pool comprises:
the balancing scheduler determines a node in the stateless node pool whose service state information indicates no failure as a node to be allocated;
the balancing scheduler determines a node to be allocated whose load information meets a load requirement as a target node;
and the balancing scheduler distributes the plurality of service processing requests to the target nodes in a balanced manner.
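As a sketch of the selection logic in claims 2 and 3, assuming a dictionary per node with hypothetical load and service_state fields, and with round-robin chosen here purely for illustration as one possible balanced policy:

```python
def pick_target_nodes(nodes, load_threshold=0.8):
    # Nodes whose service state is not "failed" are the nodes to be allocated.
    candidates = [n for n in nodes if n["service_state"] != "failed"]
    # Among those, nodes whose load meets the requirement become target nodes.
    return [n for n in candidates if n["load"] <= load_threshold]

def distribute(requests, target_nodes):
    # Round-robin the requests over the target nodes.
    assignment = {n["name"]: [] for n in target_nodes}
    for i, request in enumerate(requests):
        node = target_nodes[i % len(target_nodes)]
        assignment[node["name"]].append(request)
    return assignment

nodes = [
    {"name": "node-1", "service_state": "ok",     "load": 0.35},
    {"name": "node-2", "service_state": "failed", "load": 0.10},
    {"name": "node-3", "service_state": "ok",     "load": 0.92},
    {"name": "node-4", "service_state": "ok",     "load": 0.50},
]
targets = pick_target_nodes(nodes)                 # node-1 and node-4
print(distribute(["req-a", "req-b", "req-c"], targets))
```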
4. The method of any of claims 1 to 3, wherein the control nodes in the stateful node pool comprise a master control node, a standby control node, and a slave control node, the method further comprising:
a management component of the cluster acquires a preset total number of control nodes;
the management component determines the number of the slave control nodes according to the total number of the control nodes;
the management component acquires a performance index of each node of the cluster, wherein the performance index comprises the storage performance of the node and the operation speed of the node;
the management component determines a node whose operation speed meets a first operation condition as the master control node, determines a node whose operation speed meets a second operation condition as the standby control node, and determines nodes whose operation speed meets a third operation condition and whose number meets a number threshold as the slave control nodes, wherein the number threshold is determined according to the number of the slave control nodes.
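One possible reading of claim 4, sketched below: nodes are ranked by operation speed, the fastest becomes the master control node, the next becomes the standby control node, and the following nodes, up to the derived count, become slave control nodes. The ranking is only an assumption about what the first, second, and third operation conditions might look like.

```python
def assign_control_roles(nodes, total_control_nodes):
    # The number of slave control nodes is derived from the preset total
    # (one master and one standby are always needed).
    slave_count = max(total_control_nodes - 2, 0)
    ranked = sorted(nodes, key=lambda n: n["operation_speed"], reverse=True)
    master = ranked[0]                     # fastest node becomes the master control node
    standby = ranked[1]                    # next fastest becomes the standby control node
    slaves = ranked[2:2 + slave_count]     # following nodes become slave control nodes
    return master, standby, slaves

cluster_nodes = [
    {"name": "node-1", "operation_speed": 120, "storage_performance": 0.9},
    {"name": "node-2", "operation_speed": 95,  "storage_performance": 0.8},
    {"name": "node-3", "operation_speed": 88,  "storage_performance": 0.7},
    {"name": "node-4", "operation_speed": 60,  "storage_performance": 0.6},
]
master, standby, slaves = assign_control_roles(cluster_nodes, total_control_nodes=3)
```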
5. The method of any of claims 1 to 3, wherein the control nodes in the stateful node pool comprise a master control node, a standby control node, and a slave control node, the method further comprising:
a management component of the cluster presents a configuration interface, and the configuration interface is used for configuring the master control node, the standby control node and the slave control node;
the management component receives configuration operations for the master control node, the standby control node and the slave control node, respectively, based on the configuration interface;
and the management component determines one master control node, one standby control node and at least one slave control node based on the configuration operations.
6. The method of any of claims 1 to 3, wherein each node in the stateless node pool comprises a management client and a proxy client, the method further comprising:
when a control node in the stateful node pool fails, the management client acquires an address of a new control node from a management component of the cluster;
the management client sends the address of the new control node to the proxy client;
and the proxy client changes the address used for accessing the common data of the cluster to the address of the new control node.
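The client-side half of claim 6 could look roughly like the sketch below; ManagementClient and ProxyClient are hypothetical names, and the management component is assumed to expose the current control node address through some query call.

```python
class ProxyClient:
    """Forwards accesses to the cluster's common data to the current control node."""
    def __init__(self, control_node_address):
        self.control_node_address = control_node_address

    def set_control_node_address(self, address):
        # After this call, every access to the common data goes to the new control node.
        self.control_node_address = address

class ManagementClient:
    """Runs on each stateless node and reacts to control-node failures."""
    def __init__(self, management_component, proxy_client):
        self.management_component = management_component
        self.proxy_client = proxy_client

    def on_control_node_failure(self):
        new_address = self.management_component.get_control_node_address()
        self.proxy_client.set_control_node_address(new_address)
```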
7. The method of claim 6, wherein the control nodes in the stateful node pool comprise a master control node, a standby control node, and a slave control node, and wherein in the event of a failure of a control node in the stateful node pool, the obtaining, by the management client, the address of the new control node from a management component of the cluster comprises:
in a case where the management component of the cluster determines, through election voting among the nodes in the stateful node pool, that the master control node has failed, the standby control node is switched to be a new master control node;
the management component determines one slave control node from a plurality of slave control nodes as a new standby control node;
the management component selects a new slave control node from the nodes in the stateless node pool and adds the new slave control node to the stateful node pool;
and the management component sends the address of the new master control node to the management client.
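A condensed sketch of that failover sequence follows; the pool layout, the example addresses, and the notification callback are assumptions for illustration, not the patent's actual data structures.

```python
def handle_master_failure(stateful_pool, stateless_pool, notify_clients):
    """Rotate control-node roles after the master control node has been voted as failed."""
    # The standby control node takes over as the new master control node.
    stateful_pool["master"] = stateful_pool["standby"]
    # One slave control node is promoted to be the new standby control node.
    stateful_pool["standby"] = stateful_pool["slaves"].pop(0)
    # A node from the stateless pool is recruited as a new slave control node,
    # so the stateful node pool keeps its size.
    stateful_pool["slaves"].append(stateless_pool.pop(0))
    # The management clients learn the address of the new master control node.
    notify_clients(stateful_pool["master"])
    return stateful_pool

pools = {"master": "10.0.0.1", "standby": "10.0.0.2", "slaves": ["10.0.0.3"]}
stateless = ["10.0.0.7", "10.0.0.8"]
handle_master_failure(pools, stateless,
                      notify_clients=lambda addr: print("new master:", addr))
```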
8. The method of claim 1, wherein the method further comprises:
a management component of the cluster presents a configuration interface, and the configuration interface is used for adding or deleting nodes in the stateless node pool;
the management component receives an operation of adding or deleting a node based on the configuration interface;
and the management component adds or removes nodes in the stateless node pool based on the operation of adding or deleting a node, so as to expand or reduce the management scale.
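A toy sketch of the scale-out and scale-in step; the operation strings "add" and "delete" are placeholders for whatever the configuration interface actually sends.

```python
def scale_stateless_pool(pool, operation, node):
    if operation == "add":
        pool.append(node)        # scale out: one more node can receive service requests
    elif operation == "delete" and node in pool:
        pool.remove(node)        # scale in: shrink the management scale
    return pool

pool = ["node-1", "node-2"]
scale_stateless_pool(pool, "add", "node-3")
scale_stateless_pool(pool, "delete", "node-1")
print(pool)                      # ['node-2', 'node-3']
```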
9. The method of claim 1, wherein the method further comprises:
and the management component of the cluster configures a virtual Internet Protocol (IP) address on the balancing scheduler, so as to provide an access entry for service processing requests.
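For example, the access entry might be modelled as a small configuration record bound to the balancing scheduler; the field names are hypothetical, and the address below comes from the TEST-NET documentation range rather than from the patent.

```python
SCHEDULER_CONFIG = {
    "virtual_ip": "192.0.2.10",          # virtual IP configured by the management component
    "port": 443,
    "backend_pool": "stateless-node-pool",
}

def access_entry(config):
    # Clients send service processing requests to the virtual IP, never to individual nodes.
    return f"https://{config['virtual_ip']}:{config['port']}"

print(access_entry(SCHEDULER_CONFIG))    # https://192.0.2.10:443
```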
10. A cluster management apparatus, the apparatus comprising a balancing scheduler, a target node in a stateless node pool of the cluster, and a control node in a stateful node pool of the cluster, characterized in that:
the balancing scheduler is configured to distribute a plurality of acquired service processing requests, in a balanced manner, to target nodes in the stateless node pool of the cluster;
each target node is configured to process the received service processing request;
and each target node is further configured to, in a case where it determines while processing the received service processing request that the common data of the cluster needs to be accessed, access a control node in the stateful node pool of the cluster, so as to complete the processing of the service processing request.
11. An electronic device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 9 when executing the program.
12. A computer storage medium having stored thereon executable instructions for causing a processor to perform the steps of the method of any one of claims 1 to 9 when executed.
CN202110722748.2A 2021-06-29 2021-06-29 Cluster management method, device, equipment and computer storage medium Active CN113326100B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110722748.2A CN113326100B (en) 2021-06-29 2021-06-29 Cluster management method, device, equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110722748.2A CN113326100B (en) 2021-06-29 2021-06-29 Cluster management method, device, equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN113326100A true CN113326100A (en) 2021-08-31
CN113326100B CN113326100B (en) 2024-04-09

Family

ID=77425097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110722748.2A Active CN113326100B (en) 2021-06-29 2021-06-29 Cluster management method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN113326100B (en)

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050198335A1 (en) * 2001-02-06 2005-09-08 Microsoft Corporation Distributed load balancing for single entry-point systems
EP3702918A1 (en) * 2007-04-25 2020-09-02 Alibaba Group Holding Limited Method and apparatus for cluster data processing
WO2011140951A1 (en) * 2010-08-25 2011-11-17 华为技术有限公司 Method, device and system for load balancing
US20120159523A1 (en) * 2010-12-17 2012-06-21 Microsoft Corporation Multi-tenant, high-density container service for hosting stateful and stateless middleware components
CN103092697A (en) * 2010-12-17 2013-05-08 微软公司 Multi-tenant, high-density container service for hosting stateful and stateless middleware components
WO2014054075A1 (en) * 2012-10-04 2014-04-10 Hitachi, Ltd. System management method, and computer system
CN103227754A (en) * 2013-04-16 2013-07-31 浪潮(北京)电子信息产业有限公司 Dynamic load balancing method of high-availability cluster system, and node equipment
US10216770B1 (en) * 2014-10-31 2019-02-26 Amazon Technologies, Inc. Scaling stateful clusters while maintaining access
CN107925876A (en) * 2015-08-14 2018-04-17 瑞典爱立信有限公司 Node and method for handling mobility procedures of wireless devices
CN106603592A (en) * 2015-10-15 2017-04-26 中国电信股份有限公司 Application cluster migrating method and migrating device based on service model
CN106790692A (en) * 2017-02-20 2017-05-31 郑州云海信息技术有限公司 A kind of load-balancing method and device of many clusters
CN109343963A (en) * 2018-10-30 2019-02-15 杭州数梦工场科技有限公司 A kind of the application access method, apparatus and relevant device of container cluster
KR102112047B1 (en) * 2019-01-29 2020-05-18 주식회사 리얼타임테크 Method for adding node in hybride p2p type cluster system
CN110727709A (en) * 2019-10-10 2020-01-24 北京优炫软件股份有限公司 Cluster database system
CN110798517A (en) * 2019-10-22 2020-02-14 雅马哈发动机(厦门)信息系统有限公司 Decentralized cluster load balancing method and system, mobile terminal and storage medium
CN112015544A (en) * 2020-06-30 2020-12-01 苏州浪潮智能科技有限公司 Load balancing method, device and equipment of k8s cluster and storage medium
CN112492022A (en) * 2020-11-25 2021-03-12 上海中通吉网络技术有限公司 Cluster, method, system and storage medium for improving database availability
CN112445623A (en) * 2020-12-14 2021-03-05 招商局金融科技有限公司 Multi-cluster management method and device, electronic equipment and storage medium
CN112671928A (en) * 2020-12-31 2021-04-16 北京天融信网络安全技术有限公司 Equipment centralized management architecture, load balancing method, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114124903A (en) * 2021-11-15 2022-03-01 新华三大数据技术有限公司 Virtual IP address management method and device
CN115904822A (en) * 2022-12-21 2023-04-04 长春吉大正元信息技术股份有限公司 Cluster repairing method and device

Also Published As

Publication number Publication date
CN113326100B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
US11765110B2 (en) Method and system for providing resiliency in interaction servicing across data centers
CN112671882B (en) Same-city double-activity system and method based on micro-service
CN102611735B (en) A kind of load-balancing method of application service and system
US8032780B2 (en) Virtualization based high availability cluster system and method for managing failure in virtualization based high availability cluster system
US10523586B2 (en) Port switch service system
EP1810447B1 (en) Method, system and program product for automated topology formation in dynamic distributed environments
CN100549960C (en) The troop method and system of the quick application notification that changes in the computing system
CN108712464A (en) A kind of implementation method towards cluster micro services High Availabitity
US9075660B2 (en) Apparatus and method for providing service availability to a user via selection of data centers for the user
JP2019522293A (en) Acceleration resource processing method and apparatus
CN102143046A (en) Load balancing method, equipment and system
CN103581276A (en) Cluster management device and system, service client side and corresponding method
CN109802986B (en) Device management method, system, device and server
CN114900526B (en) Load balancing method and system, computer storage medium and electronic equipment
CN113326100B (en) Cluster management method, device, equipment and computer storage medium
CN117492944A (en) Task scheduling method and device, electronic equipment and readable storage medium
CN109733444B (en) Database system and train monitoring and management equipment
EP3457668B1 (en) Clustering in unified communication and collaboration services
US7519855B2 (en) Method and system for distributing data processing units in a communication network
JP2016177324A (en) Information processing apparatus, information processing system, information processing method, and program
CN116112569B (en) Micro-service scheduling method and management system
CN111092754A (en) Real-time access service system and implementation method thereof
CN116069860A (en) Data synchronization method and multi-activity system
CN114553704A (en) Method and system for supporting multiple devices to access server simultaneously to realize capacity expansion and contraction
Dimitrov et al. Low-cost Open Data As-a-Service in the Cloud.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant