Disclosure of Invention
In view of the foregoing, the present invention provides an Nginx load multi-cluster access splitting method and related apparatus that overcome, or at least partially solve, the foregoing problems.
In a first aspect, an Nginx load multi-cluster access splitting method includes:
when the cluster front end accesses the system initial page, a flow controller intercepts and parses an Nginx request to obtain source cluster information of the Nginx request, wherein the flow controller is arranged in the Nginx load balancer;
dynamically setting a reverse proxy and a cluster backend according to the source cluster information, so as to distribute the Nginx request to the corresponding cluster backend through the reverse proxy;
intercepting the Nginx requests of other modules of the cluster front end for the system initial page, and distributing the Nginx requests of the other modules to the corresponding cluster backends through the reverse proxy according to the source cluster information of the Nginx requests of the other modules.
Optionally, in some optional embodiments, when the cluster front end accesses the system initial page, the flow controller intercepting and parsing the Nginx request to obtain the source cluster information of the Nginx request includes:
when the cluster front end accesses the system initial page, intercepting, by the flow controller, the Nginx request of the cluster front end for the system initial page;
parsing the header information in the Nginx request;
and matching the corresponding source cluster information according to the header information.
Optionally, in some optional embodiments, the matching corresponding source cluster information according to the header information includes:
calling a preset rule according to the header information, and determining a cluster front-end source of the Nginx request;
according to the cluster front-end source, setting the cluster flag bit information of the Nginx request as the source cluster information;
and storing cluster flag bit information of the Nginx request.
Optionally, in some optional embodiments, the dynamically setting of a reverse proxy and a cluster backend according to the source cluster information, so as to distribute the Nginx request to the corresponding cluster backend through the reverse proxy, includes:
according to the stored cluster flag bit information, dynamically setting a reverse proxy corresponding to the cluster flag bit information and a cluster backend, wherein the cluster backend is the cluster that processes Nginx requests carrying the cluster flag bit information;
and distributing the Nginx request to the cluster backend for processing through the reverse proxy.
Optionally, in some optional embodiments, the intercepting of the Nginx requests of other modules of the cluster front end for the system initial page, and the distributing of the Nginx requests of the other modules to the corresponding cluster backends through the reverse proxy according to the source cluster information of the Nginx requests of the other modules, include:
intercepting the Nginx requests of other modules of the cluster front end for the system initial page;
parsing the Nginx requests of the other modules to obtain the source cluster information of the Nginx requests of the other modules;
and determining the corresponding reverse proxy according to the source cluster information, and distributing the Nginx requests of the other modules to the corresponding cluster backends through the reverse proxy.
In a second aspect, an Nginx load multi-cluster access splitting device comprises a source cluster information parsing unit, a proxy setting unit, and an other request interception unit;
The source cluster information parsing unit is configured to intercept and parse, through the flow controller, an Nginx request when the cluster front end accesses the system initial page, so as to obtain the source cluster information of the Nginx request, wherein the flow controller is arranged in the Nginx load balancer;
The proxy setting unit is configured to dynamically set a reverse proxy and a cluster backend according to the source cluster information, so as to distribute the Nginx request to the corresponding cluster backend through the reverse proxy;
The other request interception unit is configured to intercept the Nginx requests of other modules of the cluster front end for the system initial page, and distribute the Nginx requests of the other modules to the corresponding cluster backends through the reverse proxy according to the source cluster information of the Nginx requests of the other modules.
Optionally, in some optional embodiments, the source cluster information parsing unit includes an Nginx request interception subunit, a header information parsing subunit, and a source cluster information matching subunit;
The Nginx request interception subunit is configured to intercept, through the flow controller, the Nginx request of the cluster front end for the system initial page when the cluster front end accesses the system initial page;
The header information parsing subunit is configured to parse the header information in the Nginx request;
The source cluster information matching subunit is configured to match corresponding source cluster information according to the header information.
Optionally, in some optional embodiments, the source cluster information matching subunit includes a rule calling subunit, a flag bit information setting subunit, and a flag bit information storing subunit;
The rule calling subunit is configured to call a preset rule according to the header information to determine the cluster front-end source of the Nginx request;
The flag bit information setting subunit is configured to set the cluster flag bit information of the Nginx request as the source cluster information according to the cluster front-end source;
The flag bit information storage subunit is configured to store the cluster flag bit information of the Nginx request.
In a third aspect, a computer readable storage medium has a program stored thereon which, when executed by a processor, implements the Nginx load multi-cluster access splitting method of any one of the above.
In a fourth aspect, an electronic device comprises at least one processor, and at least one memory and a bus connected to the processor, wherein the processor and the memory communicate with each other through the bus, and the processor is configured to call program instructions in the memory so as to execute the Nginx load multi-cluster access splitting method described above.
According to the Nginx load multi-cluster access splitting method and related apparatus of the present invention, when the cluster front end accesses the system initial page, a flow controller intercepts and parses the Nginx request to obtain the source cluster information of the Nginx request, the flow controller being arranged in the Nginx load balancer; a reverse proxy and a cluster backend are dynamically set according to the source cluster information, so that the Nginx request is distributed to the corresponding cluster backend through the reverse proxy; and the Nginx requests of other modules of the cluster front end for the system initial page are intercepted and distributed to the corresponding cluster backends through the reverse proxy according to the source cluster information of those requests. Therefore, by arranging the flow controller in the Nginx load balancer and then parsing Nginx requests and distributing them to the corresponding cluster backend based on the flow controller, the invention ensures that Nginx requests are distributed to the correct cluster backend for processing in a multi-cluster cross-domain access scenario. Because the splitting is controlled at the level of Nginx requests rather than of individual users, it does not depend on the user dimension and therefore covers all users.
The foregoing description is only an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features and advantages of the present invention more readily apparent, specific embodiments of the present invention are set forth below.
Detailed Description
With the rapid development of information technology, software service systems frequently face upgrading and transformation. In a traditional system upgrade and release, the system has to be taken offline and an announcement has to be issued in advance, which is inconvenient for users and also places great demands on development and operation personnel. During a system upgrade iteration, the traffic should therefore be directed to the upgraded cluster service under control, so that version switching is achieved gradually, the impact of faults is kept within an acceptable range, and the online risk is reduced, thereby ensuring the safe, stable and reliable operation of the system and improving the user experience.
The current common version iteration mode is gray release, which is usually adopted for services based on Nginx load balancing: a portion of users continue to use the existing version of the service while another portion start to use the new version, the scope is gradually enlarged, and finally all users are migrated to the new version. This method switches traffic in the user dimension and therefore cannot cover all users; moreover, for a system whose modules are deployed independently and accessed through domain names, it cannot guarantee that the system is accessed on the same cluster when switching between modules, which leads to request errors.
As shown in fig. 1, the typical system structure of existing Nginx load balancing is often used for early trial running of a version release. When a program accesses the system, if an access coming from cluster A is not controlled, it is not necessarily distributed to a backend service of cluster A and may instead be split to a backend service of cluster B, so that requests become disordered and errors may occur; the same applies to cluster B.
In the case where each module is accessed through an independent domain name, front-end loading confusion may also arise. For example, if cluster A currently switches to accessing module two, the request may be distributed to module two of cluster B, causing a page load error or the like.
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
As shown in FIG. 2, the invention provides an Nginx load multi-cluster access splitting method, which comprises steps S100, S200 and S300;
S100, when the cluster front end accesses the system initial page, a flow controller intercepts and parses an Nginx request to obtain source cluster information of the Nginx request, wherein the flow controller is arranged in the Nginx load balancer;
Optionally, the current application system runs a plurality of clusters in parallel through Nginx load balancing, and the cluster information can be configured in the Nginx load balancer to achieve multi-cluster load, so that a front-end request (for example, from the cluster A front end in fig. 1) is distributed to a back-end service (for example, the cluster A backend in fig. 1) through the Nginx load balancer; the invention is not limited to this.
Optionally, as mentioned above, the existing solution mainly has the following problems: after the system is updated, the traffic is switched in the user dimension, so full coverage cannot be achieved; meanwhile, most systems separate the front end from the back end, are relatively complex, and consist of a plurality of independently deployed modules. When the resources of the deployment cluster are insufficient, a new cluster often has to be set up independently to meet the resource requirement; at this time, identical services may exist on different clusters, and the requests of one cluster may need to circulate inside that cluster and must not flow to other clusters. If the intermediate routing is not controlled, it cannot be guaranteed that a request circulates within the same cluster, and cross-domain access to the services of a different cluster occurs, which causes business logic errors.
That is, after a system update, an Nginx request that is allowed to access the new system typically accesses the system initial page of the new system first. Therefore, the flow controller of the present invention may intercept and parse the Nginx request to obtain the information of the cluster front end from which the Nginx request originates (i.e., the source cluster information) for the subsequent setting of the reverse proxy and the cluster backend; the present invention is not limited to this.
Optionally, the flow controller is arranged in the Nginx load balancer, so that it can conveniently intercept each Nginx request of the cluster front end, parse the source cluster information, and split the traffic; the specific system architecture is shown in fig. 3, and the invention is not limited to this.
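As an illustration of this interception step only, the following is a minimal Python sketch that models the flow-controller logic outside Nginx for readability. The class and function names (FlowController, resolve_source_cluster), the initial-page path "/index", the client_id key, the header field used, and the backend URLs are all assumptions of the sketch, not requirements of the invention.

```python
# Illustrative sketch only: models a flow controller that sits in front of the
# cluster backends and inspects every Nginx request before it is proxied.
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical mapping from cluster flag bit to backend address pool.
CLUSTER_BACKENDS: Dict[int, str] = {
    0: "http://backend-cluster-a",   # cluster A backend services
    1: "http://backend-cluster-b",   # cluster B backend services
}

@dataclass
class Request:
    path: str
    headers: Dict[str, str] = field(default_factory=dict)

class FlowController:
    """Intercepts initial-page requests and records their source cluster."""

    def __init__(self) -> None:
        # Stored flag bits, keyed here by a client identifier (an assumption).
        self.flag_store: Dict[str, int] = {}

    def intercept(self, client_id: str, request: Request) -> str:
        """Return the backend that should handle this request."""
        if request.path == "/index":                 # system initial page (assumed path)
            flag = self.resolve_source_cluster(request.headers)
            self.flag_store[client_id] = flag        # keep for later module requests
        else:
            flag = self.flag_store.get(client_id, 0) # other modules reuse the stored flag
        return CLUSTER_BACKENDS[flag]

    def resolve_source_cluster(self, headers: Dict[str, str]) -> int:
        # Preset rule (assumed): the Host header tells which cluster front end sent it.
        host = headers.get("Host", "")
        return 1 if "cluster-b" in host else 0

# Usage: the initial-page request fixes the cluster; later requests reuse it.
fc = FlowController()
req = Request("/index", {"Host": "cluster-b.example.com"})
assert fc.intercept("client-1", req) == "http://backend-cluster-b"
```

In an actual deployment the same decision would typically be made inside the Nginx load balancer itself, with the chosen backend applied through its reverse-proxy configuration.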
Nginx is a high-performance HTTP and reverse-proxy web server that also provides IMAP (Internet Message Access Protocol), POP3 (Post Office Protocol Version 3) and SMTP (Simple Mail Transfer Protocol) services. It is characterized by a small memory footprint and strong concurrency capability.
Load balancing (Load Balance) means that the load (work tasks) is balanced and shared among a plurality of operation units, such as FTP servers, web servers, enterprise core application servers and other main task servers, so that the work tasks are completed cooperatively.
Optionally, in some optional embodiments, the step S100 includes steps 1.1, 1.2 and 1.3;
Step 1.1, when the cluster front end accesses the system initial page, the flow controller intercepts the Nginx request of the cluster front end for the system initial page;
Step 1.2, parsing the header information in the Nginx request;
Step 1.3, matching the corresponding source cluster information according to the header information.
Optionally, by intercepting the access request for the system initial page, the invention can parse the HTTP request information (i.e., the information of the Nginx request) of the system initial page and analyze the information in the request header in order to match the cluster information; the invention is not limited to this.
Optionally, the Hypertext Transfer Protocol (HTTP) is a simple request-response protocol that typically runs on top of TCP.
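As an illustration of steps 1.1 to 1.3, the sketch below matches source cluster information from request headers against a small preset rule table. The specific header fields (Host, Referer), the rule format, and the example host names are assumptions made for this example only; the invention does not prescribe them.

```python
# Illustrative only: match source cluster information from HTTP header fields
# using a preset rule table, as in steps 1.1-1.3.
from typing import Dict, Optional

# Hypothetical preset rules: (header field, substring to look for, cluster name).
PRESET_RULES = [
    ("Host", "cluster-a.example.com", "A"),
    ("Host", "cluster-b.example.com", "B"),
    ("Referer", "cluster-a", "A"),
    ("Referer", "cluster-b", "B"),
]

def match_source_cluster(headers: Dict[str, str]) -> Optional[str]:
    """Return the cluster front-end source ('A' or 'B'), or None if no rule matches."""
    for header_field, needle, cluster in PRESET_RULES:
        if needle in headers.get(header_field, ""):
            return cluster
    return None

# Usage: headers captured from an intercepted initial-page request.
headers = {"Host": "cluster-b.example.com", "Referer": "https://cluster-b/portal"}
assert match_source_cluster(headers) == "B"
```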
Optionally, in certain optional embodiments, the step 1.3 includes a step 1.31, a step 1.32 and a step 1.33;
Step 1.31, calling a preset rule according to the header information, and determining a cluster front-end source of the Nginx request;
step 1.32, setting the cluster flag bit information of the Nginx request as the source cluster information according to the cluster front-end source;
and step 1.33, storing cluster flag bit information of the Nginx request.
Optionally, the invention can determine the source of the request according to the relevant information in the header of the system initial page request and the corresponding rule, and set the value of the corresponding cluster flag bit. For example, the value of the cluster flag bit of cluster A is set to 0, and the value of the cluster flag bit of cluster B is set to 1; the invention is not limited to this.
Optionally, according to the matching result, the invention can store the value of the cluster flag bit as the basis for setting the cluster for subsequent access requests: access requests originating from the same cluster generally share the same reverse proxy setting and the same cluster backend setting; the invention is not limited to this.
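Purely as an illustration of steps 1.31 to 1.33, the sketch below sets the cluster flag bit (0 for cluster A, 1 for cluster B, following the example values above) and stores it so that later requests from the same source are dispatched consistently. Keeping the flag in a per-session map is an assumption of the sketch; the invention does not fix where the flag is stored.

```python
# Illustrative only: set and store the cluster flag bit for later requests.
from typing import Dict

CLUSTER_FLAGS: Dict[str, int] = {"A": 0, "B": 1}   # example values from the text above

flag_store: Dict[str, int] = {}                    # assumed per-session storage

def set_and_store_flag(session_id: str, cluster: str) -> int:
    """Record the source cluster of the initial-page request as a flag bit."""
    flag = CLUSTER_FLAGS[cluster]
    flag_store[session_id] = flag                  # basis for dispatching later requests
    return flag

set_and_store_flag("session-123", "B")             # e.g. the initial page came from cluster B
assert flag_store["session-123"] == 1
```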
S200, dynamically setting a reverse proxy and a cluster backend according to the source cluster information, so as to distribute the Nginx request to the corresponding cluster backend through the reverse proxy;
optionally, in some optional embodiments, the step S200 includes a step 2.1 and a step 2.2;
Step 2.1, according to the stored cluster flag bit information, dynamically setting a reverse proxy corresponding to the cluster flag bit information and a cluster backend, wherein the cluster backend is the cluster that processes Nginx requests carrying the cluster flag bit information;
Step 2.2, distributing the Nginx request to the cluster backend for processing through the reverse proxy.
Optionally, the invention can dynamically set a reverse proxy in the Nginx load balancer according to the cluster flag bit value and set the corresponding cluster, so as to ensure that each module is accessed within one cluster at both the front end and the back end; the invention is not limited to this.
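A minimal sketch of steps 2.1 and 2.2 follows: the stored flag bit selects the reverse-proxy target so that one access stays inside the same cluster from front end to back end. Representing the reverse proxy as a plain function and the backend URLs shown here are assumptions of this example; a real deployment would instead set the corresponding reverse-proxy directives in the Nginx load balancer.

```python
# Illustrative only: choose the reverse-proxy target from the stored flag bit
# and forward the request to the backend of the same cluster (steps 2.1-2.2).
from typing import Dict

BACKENDS_BY_FLAG: Dict[int, str] = {
    0: "http://backend-cluster-a",   # cluster A backend
    1: "http://backend-cluster-b",   # cluster B backend
}

def reverse_proxy(flag_store: Dict[str, int], session_id: str, path: str) -> str:
    """Return the backend URL the request is proxied to; defaults to cluster A."""
    flag = flag_store.get(session_id, 0)
    return f"{BACKENDS_BY_FLAG[flag]}{path}"

# Usage: a request from the session whose initial page came from cluster B.
flag_store = {"session-123": 1}
assert reverse_proxy(flag_store, "session-123", "/module-two/data") == \
    "http://backend-cluster-b/module-two/data"
```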
S300, intercepting the Nginx requests of other modules of the cluster front end for the system initial page, and distributing the Nginx requests of the other modules to the corresponding cluster backends through the reverse proxy according to the source cluster information of the Nginx requests of the other modules.
After the reverse proxy and the cluster backend have been set as described above, the invention can distribute the access requests of other modules of the same cluster directly to the corresponding cluster backend for processing according to the parameters already set. Of course, the invention can also set the corresponding reverse proxy and cluster backend again according to the access requests of other modules of the same cluster and then distribute those requests; the invention is not limited to this.
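To illustrate the two options just mentioned, the sketch below either reuses the stored flag bit directly or re-derives it from the request headers when no stored flag is found. Both branches, the header field used, and the backend URLs are assumptions of this example rather than mandated behaviour.

```python
# Illustrative only: dispatch an Nginx request from another module of the same cluster.
from typing import Dict

def dispatch_other_module(flag_store: Dict[str, int], session_id: str,
                          headers: Dict[str, str], path: str) -> str:
    """Use the stored flag if available, otherwise re-derive it from the headers."""
    if session_id in flag_store:
        flag = flag_store[session_id]                               # option 1: reuse stored setting
    else:
        flag = 1 if "cluster-b" in headers.get("Host", "") else 0   # option 2: re-parse and set again
        flag_store[session_id] = flag
    backend = {0: "http://backend-cluster-a", 1: "http://backend-cluster-b"}[flag]
    return f"{backend}{path}"

# Usage: a module-three request from the same session is routed to the same cluster.
store = {"session-123": 1}
assert dispatch_other_module(store, "session-123", {}, "/module-three") == \
    "http://backend-cluster-b/module-three"
```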
Optionally, in the context of the present invention, the data storage media used by the existing cluster service and by the upgraded cluster service are required to be consistent, so as to ensure the consistency of system data and functions.
Optionally, in some optional embodiments, the step S300 includes a step 3.1, a step 3.2, and a step 3.3;
Step 3.1, intercepting the Nginx requests of other modules of the cluster front end for the system initial page;
Step 3.2, parsing the Nginx requests of the other modules to obtain the source cluster information of the Nginx requests of the other modules;
Step 3.3, determining the corresponding reverse proxy according to the source cluster information, and distributing the Nginx requests of the other modules to the corresponding cluster backends through the reverse proxy.
Optionally, existing Nginx load balancing only balances load across multiple clusters and cannot directly solve the problem of controlling cross-domain access traffic among multiple clusters; other schemes only load-balance the backend services and distribute client requests in the user dimension, so they cannot cover all users. The invention ensures the stability of the cluster traffic through the flow controller and avoids the problem of incomplete user coverage.
In the invention, a flow controller is added to the Nginx load balancer, the front-end access requests are intercepted and parsed, the access cluster information is determined through the system initial page, a reverse proxy is dynamically set, and the access cluster is designated, so that the access traffic is guaranteed to stay within one cluster, which enhances the flexibility and the traffic controllability of the whole architecture.
As shown in fig. 4, the present invention provides an Nginx load multi-cluster access splitting device, which includes a source cluster information parsing unit 100, a proxy setting unit 200, and an other request interception unit 300;
The source cluster information parsing unit 100 is configured to intercept and parse, through the flow controller, an Nginx request when the cluster front end accesses the system initial page, so as to obtain the source cluster information of the Nginx request, wherein the flow controller is arranged in the Nginx load balancer;
The proxy setting unit 200 is configured to dynamically set a reverse proxy and a cluster backend according to the source cluster information, so that the Nginx request is distributed to the corresponding cluster backend through the reverse proxy;
The other request interception unit 300 is configured to intercept the Nginx requests of other modules of the cluster front end for the system initial page, and distribute the Nginx requests of the other modules to the corresponding cluster backends through the reverse proxy according to the source cluster information of the Nginx requests of the other modules.
Optionally, in some optional embodiments, the source cluster information parsing unit 100 includes an Nginx request interception subunit, a header information parsing subunit, and a source cluster information matching subunit;
The Nginx request interception subunit is configured to intercept, through the flow controller, the Nginx request of the cluster front end for the system initial page when the cluster front end accesses the system initial page;
The header information parsing subunit is configured to parse the header information in the Nginx request;
The source cluster information matching subunit is configured to match corresponding source cluster information according to the header information.
Optionally, in some optional embodiments, the source cluster information matching subunit includes a rule calling subunit, a flag bit information setting subunit, and a flag bit information storing subunit;
The rule calling subunit is configured to call a preset rule according to the header information to determine the cluster front-end source of the Nginx request;
The flag bit information setting subunit is configured to set the cluster flag bit information of the Nginx request as the source cluster information according to the cluster front-end source;
The flag bit information storage subunit is configured to store the cluster flag bit information of the Nginx request.
Optionally, in some optional embodiments, the proxy setting unit 200 includes a proxy cluster setting subunit and a request distribution subunit;
The proxy cluster setting subunit is configured to dynamically set, according to the stored cluster flag bit information, a reverse proxy corresponding to the cluster flag bit information and a cluster backend, wherein the cluster backend is the cluster that processes Nginx requests carrying the cluster flag bit information;
The request distribution subunit is configured to distribute the Nginx request to the cluster backend for processing through the reverse proxy.
Optionally, in some optional embodiments, the other request interception unit 300 includes an other interception subunit, an other parsing subunit, and an other distribution subunit;
The other interception subunit is configured to intercept the Nginx requests of other modules of the cluster front end for the system initial page;
The other parsing subunit is configured to parse the Nginx requests of the other modules to obtain the source cluster information of the Nginx requests of the other modules;
The other distribution subunit is configured to determine the corresponding reverse proxy according to the source cluster information, and distribute the Nginx requests of the other modules to the corresponding cluster backends through the reverse proxy.
The present invention provides a computer readable storage medium having a program stored thereon which, when executed by a processor, implements the Nginx load multi-cluster access splitting method of any one of the above.
As shown in fig. 5, the present invention provides an electronic device 70. The electronic device 70 includes at least one processor 701, and at least one memory 702 and a bus 703 connected to the processor 701, wherein the processor 701 and the memory 702 communicate with each other through the bus 703, and the processor 701 is configured to call program instructions in the memory 702 to execute the Nginx load multi-cluster access splitting method described in any one of the foregoing.
In the present invention, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between these entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a related manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively briefly since they are substantially similar to the method embodiments; for relevant details, refer to the corresponding parts of the description of the method embodiments.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.