CN112953984B - Data processing method, device, medium and system - Google Patents
- Publication number
- CN112953984B · Application CN201911256210.6A
- Authority
- CN
- China
- Prior art keywords
- server
- url
- request
- target server
- score
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/63—Routing a service request depending on the request content or context
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1023—Server selection for load balancing based on a hash applied to IP addresses or costs
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Information Transfer Between Computers (AREA)
Abstract
The present disclosure relates to a data processing method, apparatus, medium, and system. The data processing method comprises the following steps: a first service server receives URL requests; a target server is determined based on the URL of each request; when the target server is the first service server, the first service server responds to the request; and when the target server is not the first service server, the URL request is forwarded to the target server. Only one copy of each cache file needs to be stored in a node, which reduces space occupation; kernel computation is performed by the lower-layer CPU, avoiding application-layer performance bottlenecks; and forwarding balance can be achieved for all request types, so the method is widely applicable.
Description
Technical Field
This document relates to content delivery networks (CDNs), and more particularly, to data processing methods, apparatus, media, and systems.
Background
A CDN system serves as a platform that carries client traffic: servers process client requests and return response results. Because servers consume response time at different rates for different services, requests easily accumulate on individual servers. Related-art load-balancing methods have two drawbacks. First, weighting back-end requests directly on the load-balancing server cannot guarantee that caches are reused, nor can it maintain keep-alive persistent connections for particular URLs. Second, a hot-spot discovery strategy detects hot-spot URLs and disperses them directly across the cache servers; however, when requests for a domain name burst as a whole and no single URL is an obvious hot spot, this strategy falls short, because hot-spot discovery systems diffuse traffic by modifying the hash mapping of individual hot-spot URLs and can neither identify nor handle domain-name-level hot spots. In addition, in the prior art, allocating weights directly from hardware information of the back-end servers, such as CPU (Central Processing Unit) and disk, makes it difficult to ensure that special services are concentrated on the same server. Moreover, a server's carrying capacity changes in real time while running; if the weights are modified at any time, caches cannot be used continuously, causing a large number of back-to-source requests, which a CDN service cannot tolerate.
Disclosure of Invention
To overcome the problems in the related art, a data processing method, apparatus, medium, and system are provided herein.
According to a first aspect herein, there is provided a data processing method, applied to a service server, comprising:
a first service server receiving a URL request;
determining a target server based on the URL of each request;
when the target server is the first service server, responding to the request by the first service server; and when the target server is not the first service server, forwarding the URL request to the target server.
The determining the target server based on the URL of each request includes:
when a server in the node has the cached file of the URL, determining the server having the cached file of the URL as the target server;
and when no server in the node has the cached file of the URL, determining the target server according to the consumption capability scores of the servers in the node.
The determining the target server according to the consumption capability scores of the servers in the node comprises:
when the consumption capability score of the first service server meets a threshold value, determining the first service server as the target server, or determining the server with the highest consumption capability score in the node as the target server;
and when the consumption capability score of the first service server does not meet the threshold value, determining the server with the highest consumption capability score in the node as the target server.
The consumption capability score comprises the sum of: a current request accumulation number score, a current request accumulation trend score, and a server configuration score.
When there are a plurality of URLs, the forwarding the URL requests to the target server comprises: calculating a forwarding proportion according to the consumption capability score of the target server, and sending the URL requests to the target server according to the forwarding proportion.
The forwarding proportion = (number of accumulated requests / total number of requests) × (consumption capability score of the target server / consumption capability score of the first service server).
Said responding to the request by the first service server when the target server is the first service server comprises: if the first service server does not have a cached file of the URL, the first service server acquires the source file, responds to the URL request, and caches the response file.
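For illustration only (not part of the claimed subject matter), the decision flow summarized above can be sketched in Python. The function names, the dictionary-based server representation, and the `consumption_score` placeholder are hypothetical; the threshold value is likewise an assumed parameter:

```python
def consumption_score(server):
    # Placeholder: in the method described above this is the sum of the
    # accumulation-number score, accumulation-trend score, and configuration score.
    return server["score"]

def handle_request(url, first_server, node_servers, cache_index, threshold=8):
    """Sketch: return the target server for one URL request."""
    # Step 1: prefer the server that already caches the URL's response file.
    cached_on = cache_index.get(url)
    if cached_on is not None:
        return cached_on
    # Step 2: no cache anywhere in the node -- the first server responds
    # itself if its consumption capability score meets the threshold.
    if consumption_score(first_server) >= threshold:
        return first_server
    # Step 3: otherwise pick the server with the highest score in the node.
    return max(node_servers, key=consumption_score)
```

A request is thus either answered locally, forwarded to the cache holder, or forwarded to the best-scoring server, matching the three branches of the method.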
According to another aspect herein, there is provided a data processing apparatus for use in a service server, comprising:
a receiving module, used for the first service server to receive a URL request;
a target server determining module, used for determining a target server based on the URL of each request;
and a forwarding processing module, used for responding to the request by the first service server when the target server is the first service server, and for forwarding the URL request to the target server when the target server is not the first service server.
The target server determining module determining the target server includes:
when a server in the node has the cached file of the URL, determining the server having the cached file of the URL as the target server;
and when no server in the node has the cached file of the URL, determining the target server according to the consumption capability scores of the servers in the node.
The determining the target server according to the consumption capability scores of the servers in the node comprises:
when the consumption capability score of the first service server meets a threshold value, determining the first service server as the target server, or determining the server with the highest consumption capability score in the node as the target server;
and when the consumption capability score of the first service server does not meet the threshold value, determining the server with the highest consumption capability score in the node as the target server.
The consumption capability score comprises the sum of: a current request accumulation number score, a current request accumulation trend score, and a server configuration score.
When there are a plurality of URLs, the forwarding the URL requests to the target server comprises: calculating a forwarding proportion according to the consumption capability score of the target server, and sending the URL requests to the target server according to the forwarding proportion.
The forwarding proportion = (number of accumulated requests / total number of requests) × (consumption capability score of the target server / consumption capability score of the first service server).
Said responding to the request by the first service server when the target server is the first service server comprises: if the first service server does not have a cached file of the URL, the first service server acquires the source file, responds to the URL request, and caches the response file.
According to another aspect herein, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed, implements the steps of the data processing method described above.
According to another aspect herein, there is provided a data processing system comprising a data processing apparatus as described above.
By configuring the service server so that it also executes the data processing method, smooth operation of the service can be ensured even when requests for a domain name burst as a whole without URL-level hot spots; the whole group of machines in a node is fully utilized, and the service carrying capacity is enhanced. The whole group needs to store only one copy of each cache file, reducing the occupied space. Before and after such a burst, the data processing system continues to execute, so continuity of service is barely affected. Layer-4 packet forwarding consumes the lower-layer CPU for kernel computation and is not limited by application-layer performance bottlenecks. No HTTP 302 scheduling is used, and forwarding balance can be reached for all request types and all files, so the method is widely applicable. Cache index data is stored with the URL records, requiring no external storage system. More than one server supporting the data processing can be used, even all servers in the node: if a target server's carrying capacity drops after requests are forwarded to it, the machine whose consumption capability score ranks first at the current moment is calculated and shares the load, and so on, achieving fine-grained data processing across the whole group.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the disclosure, and do not constitute a limitation on the disclosure. In the drawings:
fig. 1 is a schematic diagram of load balancing through a load balancer.
Fig. 2 is a schematic diagram of service equalization implemented by a service server.
FIG. 3 is a flowchart illustrating a method of data processing according to an exemplary embodiment.
Fig. 4 is a block diagram of a data processing apparatus according to an exemplary embodiment.
FIG. 5 is a block diagram of a computer device, according to an example embodiment.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments herein more apparent, the technical solutions in the embodiments herein will be clearly and completely described below with reference to the accompanying drawings in the embodiments herein, and it is apparent that the described embodiments are some, but not all, embodiments herein. All other embodiments, based on the embodiments herein, which a person of ordinary skill in the art would obtain without undue burden, are within the scope of protection herein. It should be noted that, without conflict, the embodiments and features of the embodiments herein may be arbitrarily combined with each other.
Fig. 1 is a schematic diagram of load balancing through a load balancer. Referring to fig. 1, in the related art, each service server (cache) receives and responds to requests, and a load balancer (switch) distributes resources evenly through a hash algorithm. However, the carrying capacity of a service server varies from service to service: different requests consume different amounts of the service server's CPU and IO resources, and the response times for processing different requests also differ, so requests easily accumulate on certain servers, that is, the number of requests allocated to a server exceeds its processing capacity, causing task overload. As shown in fig. 1, load balancers SwitchA, SwitchB, and SwitchC allocate requests of the same type to cache1 according to a preset load-balancing rule. When cache1's performance drops while processing client requests, the load balancers allocate new client requests to other service servers, which must re-acquire the cache file; as a result, multiple servers cache the same response file, wasting resources. In addition, the load-balancing rule cannot be adjusted when hot-spot requests burst at the domain-name level, that is, when a large number of users access different URLs under the same domain name at the same moment (for example, a shopping site during the Double Eleven shopping festival). For example, in fig. 1, a large number of requests arise under domain name domainB, with many clients accessing various pages of the domainB website, such as domainB/2.txt, domainB/3.txt, and so on, in the same period; because such requests do not concentrate on a single URL, the load-balancing rule cannot be adjusted, which easily overloads cache1.
To solve the above problems, in the data processing method provided herein, the service server is configured to have the function of forwarding requests. Fig. 2 is a schematic diagram of service balancing implemented by the service servers. Referring to fig. 2, taking cache1 as an example, domainB/2.txt and domainB/3.txt are requests for different URLs under domain name domainB; when many such requests arrive, cache1's capability is reduced or requests accumulate. According to the URL of each current request, cache1 determines a target server and forwards requests to other service servers. Even if requests for a hot-spot domain name burst and pile up, cache1 can forward the overloaded requests to other servers according to the data processing method, so the accumulated requests are spread to the other servers in the node.
Fig. 3 is a flow chart of a data processing method. Referring to fig. 3, the data processing method is applied to a service server, and includes:
in step S31, the first service server receives the URL request.
Step S32, determining a target server based on the URL of each request.
Step S33, when the target server is the first service server, responding to the request by the first service server; and when the target server is not the first service server, forwarding the URL request to the target server.
In this embodiment, the first service server receives requests from multiple clients forwarded by the load-balancing server, or requests forwarded by other service servers in the same node. Rather than responding to all of these requests, it determines a target server according to the URL of each request.
In one embodiment, step S32, determining the target server based on the URL of each request, comprises:
when a server in the node has the cached file of the URL, determining the server having the cached file of the URL as the target server;
and when no server in the node has the cached file of the URL, determining the target server according to the consumption capability scores of the servers in the node.
According to the URL of each request, the service server first checks whether any server in the node has cached the response file of that URL. A cache index record is established to record the response files cached by each server; by querying the cache index record, the server learns whether a cache corresponding to the URL exists and, if so, on which server it resides. If it exists, the server that has cached the response file of the URL is determined as the target server, and the URL request is forwarded to it. Thus only one copy of a URL's response file needs to be cached within the node, saving storage resources. If it does not exist, the target server is determined according to the consumption capability scores of the servers in the node, so that requests are dispersed to the server with the lowest load and best performance, speeding up responses and improving user experience.
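A minimal sketch of the cache index record described above, as a mapping from URL to the server holding its cached response file. The class and method names here are illustrative, not taken from the patent:

```python
class CacheIndex:
    """Sketch of a per-node cache index record: URL -> caching server."""

    def __init__(self):
        self._index = {}

    def lookup(self, url):
        # Which server (if any) has cached this URL's response file?
        return self._index.get(url)

    def record(self, url, server):
        # Called after a server caches a newly fetched response file,
        # so later requests for the same URL can be forwarded to it.
        self._index[url] = server
```

A lookup miss is what triggers the consumption-capability-score path; a hit names the target server directly.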
The target server may be the first service server, or another server. When the target server is the first service server, the first service server responds to the client request; when the target server is not the first service server, the URL request is forwarded to the target server.
Even if the load balancer distributes the same URL request to different service servers, balancing among the service servers allows the cache file to be reused, avoiding a large number of back-to-source requests.
In one embodiment, determining the target server according to the consumption capability scores of the servers in the node comprises:
when the consumption capability score of the first service server meets a threshold value, determining the first service server as the target server, or determining the server with the highest consumption capability score in the node as the target server. That is, when the first service server's score meets the threshold, the first service server can respond to the request itself, or a server with higher consumption capability in the node can be selected to respond.
When the consumption capability score of the first service server does not meet the threshold value, the server with the highest consumption capability score in the node is determined as the target server. A score that does not meet the threshold means the first service server may have reached its service load or requests may have accumulated; the requests it cannot currently respond to are forwarded to a server in the node with better consumption capability, dispersing the accumulated requests and speeding up responses.
The consumption capability score comprises the sum of a current request accumulation number score, a current request accumulation trend score, and a server configuration score. For example, the current request accumulation number of each server in the node is counted: if a server's accumulation number is 0, it can handle the requests allocated to it and scores 5, while the server with the highest accumulation number in the node cannot handle further requests in this period and scores 0. Similarly, the request accumulation trend of each server is counted: the server with the smallest accumulation trend scores 5, and the server with the largest scores 0. According to server configuration, such as number of CPU cores, load value, and disk read/write capability, the highest-configured server scores 5 and the lowest-configured scores 0. All of a server's scores are added to obtain its overall consumption capability score. The higher the score, the more requests the server can receive and the faster it can process them. These scores merely illustrate how the consumption capability score can be calculated; in practice, other finer-grained scores can be introduced to reflect a server's consumption capability more accurately, without limitation.
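The three-component score above can be sketched as follows. The 0-5 rank scale follows the example in the text, but the linear interpolation used to rank intermediate servers is an assumption, not the patent's exact formula:

```python
def rank(values, value, best_is_low=True):
    """Map a server's metric to 0..5: best value in the node -> 5, worst -> 0."""
    lo, hi = min(values), max(values)
    if lo == hi:
        return 5                      # all servers equal -> full marks
    frac = (value - lo) / (hi - lo)   # position between worst and best
    return round(5 * (1 - frac)) if best_is_low else round(5 * frac)

def consumption_score(server, node):
    """Sum the accumulation-number, accumulation-trend, and configuration scores."""
    backlogs = [s["backlog"] for s in node]
    trends = [s["trend"] for s in node]
    configs = [s["config"] for s in node]
    return (rank(backlogs, server["backlog"])                      # 0 piled up -> 5
            + rank(trends, server["trend"])                        # smallest trend -> 5
            + rank(configs, server["config"], best_is_low=False))  # best hardware -> 5
```

With this sketch, an idle well-configured server scores 15 and a backlogged low-end server scores 0, so `max(node, key=...)` picks the machine most able to absorb forwarded requests.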
In one embodiment, when there are a plurality of URLs, forwarding the URL requests to the target server comprises: calculating a forwarding proportion according to the consumption capability score of the target server, and sending the URL requests to the target server according to the forwarding proportion. Forwarding proportion = (number of accumulated requests / total number of requests) × (consumption capability score of the target server / consumption capability score of the first service server).
The number of accumulated requests is generally far smaller than the total number of requests, normally around 0.01% to 0.1% of the total. Forwarding the overloaded requests according to this proportion matches them more closely to the consumption capability of the target server.
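A worked instance of the forwarding-proportion formula, with illustrative numbers (the 50 / 100,000 backlog and the scores 12 and 4 are assumed for the example, not taken from the patent):

```python
def forwarding_proportion(accumulated, total, target_score, own_score):
    """forwarding proportion = (accumulated / total) * (target score / own score)."""
    return (accumulated / total) * (target_score / own_score)

# Example: 50 of 100_000 requests are piled up (0.05% of the total); the
# target server scores 12 and the overloaded first server scores 4, so the
# proportion forwarded is three times the backlog fraction: 0.0005 * 3 = 0.0015.
p = forwarding_proportion(50, 100_000, 12, 4)
```

The ratio of scores scales the backlog fraction up when the target is much stronger than the overloaded server, and down when the two are close.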
When a large number of access requests arrive instantaneously, especially a sudden domain-name-level hot spot, for example the JD.com website during the Double Eleven festival, a large number of users access different pages of the same website: the domain names of a large number of URLs are the same, yet no single URL's access volume reaches the URL hot-spot level set by the load balancer. The load balancer's hash algorithm therefore allocates a large number of different URL requests under the same domain name to the same service server, so that server's consumption capability is reached and exceeded, and requests accumulate. The first service server can send URL requests to the target server according to the forwarding proportion. After receiving the forwarded requests, the target server in turn determines a new target server based on the URL of each request and forwards requests to it according to the method described above. By analogy, several or all servers in the node jointly share the client requests and the cache resources, utilizing resources effectively. When requests pile up, the accumulated requests can be rapidly dispersed to multiple servers, speeding up responses and improving user experience.
In an embodiment, when there are a plurality of URLs, a plurality of target servers are determined based on the URL of each request, and, according to the consumption capability scores of those target servers, the accumulated requests are sent to them at once in corresponding proportions, further accelerating the dispersion of the accumulated requests.
In one embodiment, if no service server in the node has a cached file for the URL, the determined target server obtains the source file, responds to the URL request, and caches the response file. When no server in the node has a cached file for a URL, the target server goes back to the source to pull the response file, caches it locally, and updates the cache index record; the updated record is then synchronized to all servers in the node. When other servers receive a request for that URL again, they forward it to the target server, avoiding pulling from the source a second time and reducing internal consumption of the system.
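The fetch-then-synchronize step above can be sketched as follows. Here `pull_from_source` is a stand-in for the real back-to-source fetch, and representing each server's index copy as a plain dict is an assumption made for the sketch:

```python
def pull_from_source(url):
    # Stand-in for the real origin fetch (assumed for this sketch).
    return f"<contents of {url}>"

def fetch_and_register(url, target, node_indexes):
    """Target server pulls the source file, caches it, and the updated
    cache index record is synchronized to every server in the node."""
    response = pull_from_source(url)       # back-to-source request
    target["files"][url] = response        # cache the response file locally
    for index in node_indexes:             # synchronize the index record
        index[url] = target["name"]
    return response
```

After this runs, every server's index copy points requests for the URL at the target server, so the origin is contacted only once per node.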
In one embodiment, when the target server is not the first service server, the target server responding to the URL request comprises: modifying the source address of the response message to the address of the first service server, and sending the response message to the load balancer that forwarded the URL request.
For a better understanding of the data processing method herein, an example is illustrated:
taking the cache1 in fig. 2 as an example, the cache1 receives a request of a client allocated by a load balancer through a hash algorithm. Wherein, domainA/1.Txt is forwarded through load balancer switch a, domainB/2.Txt is forwarded through load balancer switch b, and domainbB/3.Txt is forwarded through load balancer switch c.
For the domainA/1.txt request, cache1 first searches the cache index record for the response file and finds that it is cached on cache1 itself; cache1 then responds to the request directly and sends the response message to SwitchA, which forwards it to the corresponding client. Alternatively, the cache index record shows that no server in the node has cached the response file, but cache1's consumption capability is sufficient; cache1 then need not forward the request: it pulls 1.txt from the source, caches it locally, responds to the request, and sends the response message to SwitchA.
For the domainB/2.txt request, cache1 searches the cache index record and finds that the 2.txt response file is cached on cache2, so it forwards domainB/2.txt to cache2. cache2 responds to the request with its local cache file, modifies the source address of the response message to cache1's address, and sends it to SwitchB, which forwards the response message to the corresponding client.
For the domainB/3.txt request, cache1 looks up the cache index record and finds that no server in the node has a cached response file. Meanwhile, because of a burst of domain-name hot-spot requests, a large number of requests for the same domain name, such as domainB/2.txt, domainB/3.txt, domainB/4.txt, and so on, are sent to the node, and requests accumulate on cache1. cache1 calculates the consumption capability scores of the servers in the node, finds that cache3's score is highest, and forwards the domainB/3.txt request to cache3. cache3 acquires the source file 3.txt, caches it locally, generates a response message, modifies the source address of the response message to cache1's address, and sends the response message to SwitchC.
Through the above embodiments, the data processing method provided herein requires the whole group to store only one copy of each cache file, reducing occupied space. Layer-4 packet forwarding consumes the lower-layer CPU for kernel computation and is not limited by application-layer performance bottlenecks. No HTTP 302 scheduling is used, so forwarding balance can be reached for all request types, giving the method a wide range of application. Cache index data is stored on the service servers, requiring no external storage system. More than one server supporting the data processing can be used, even all servers in a node. After receiving forwarded requests, a target server further processes them according to each request's URL; even if its carrying capacity drops after receiving the forwarded requests, the machine whose consumption capability score ranks first at the current moment is calculated and shares the load, and so on, achieving fine-grained data processing across the whole group. When requests for a domain name burst as a whole without URL-level hot spots, smooth operation of the service can be ensured, the whole group of machines in the node is fully utilized, and service carrying capacity is enhanced; before and after such a burst, the load-balancing system continues to execute with little impact on continuity of service.
Fig. 4 is a block diagram of a data processing apparatus. Referring to fig. 4, the data processing apparatus is applied to a service server and includes a receiving module 401, a target server determining module 402, and a forwarding processing module 403.
The receiving module 401 is configured for the first service server to receive URL requests.
The target server determining module 402 is configured to determine a target server based on the URL of each request.
The forwarding processing module 403 is configured to respond to the request by the first service server when the target server is the first service server, and to forward the URL request to the target server when the target server is not the first service server.
The target server determining module 402 determining the target server comprises:
when a server in the node has the cached file of the URL, determining the server having the cached file of the URL as the target server;
and when no server in the node has the cached file of the URL, determining the target server according to the consumption capability scores of the servers in the node.
The target server determining module determines the target server according to the consumption capability scores of the servers in the node as follows:
when the consumption capability score of the first service server meets a threshold, the first service server is determined as the target server, or the server with the highest consumption capability score in the node is determined as the target server;
and when the consumption capability score of the first service server does not meet the threshold, the server with the highest consumption capability score in the node is determined as the target server.
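The two-stage selection above (cache lookup first, then fall back to consumption capability scores) can be sketched as follows. All names here (`Server`, `select_target`, the score field) are illustrative assumptions, not terminology from the patent:

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    cached_urls: set   # URLs whose response files this server caches
    score: float       # consumption capability score (higher = more spare capacity)

def select_target(url, first_server, node_servers, threshold):
    # Stage 1: any server in the node that already holds a cached file
    # for this URL is determined as the target server.
    for server in node_servers:
        if url in server.cached_urls:
            return server
    # Stage 2: no cached file anywhere in the node, so fall back to the
    # consumption capability scores. Keep the request on the receiving
    # (first) service server if its score meets the threshold; otherwise
    # pick the highest-scoring server in the node.
    if first_server.score >= threshold:
        return first_server
    return max(node_servers, key=lambda s: s.score)
```

In this sketch the threshold branch always prefers keeping the request local, which is one of the two alternatives the text allows.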
The consumption capability score is the sum of a current request accumulation count score, a current request accumulation trend score, and a server configuration score.
When there are multiple URLs, forwarding the requests for a URL to the target server includes: calculating a forwarding ratio according to the consumption capability score of the target server, and sending the requests for the URL to the target server according to that ratio.
Forwarding ratio = (accumulated request count / total request count) × (consumption capability score of the target server / consumption capability score of the first service server).
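As a worked example of this formula (the function and variable names are assumptions; the patent does not define an API):

```python
def forwarding_ratio(accumulated, total, target_score, first_score):
    """(accumulated request count / total request count)
    * (target server score / first service server score)."""
    return (accumulated / total) * (target_score / first_score)

# 200 of 1000 requests accumulated; the target server scores 90 while the
# first service server scores 60, so 0.2 * 1.5 = 0.3 of the requests for
# this URL would be forwarded to the target server.
ratio = forwarding_ratio(200, 1000, 90, 60)
```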
When the target server is the first service server, responding to the request by the first service server includes: if the first service server does not have a cached file for the URL, the first service server fetches the source file, responds to the request for the URL, and caches the response file.
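The cache-miss path just described (fetch the source file from the origin, respond, then cache the response file) might look like the following minimal sketch; `FirstServiceServer` and `fetch_origin` are hypothetical stand-ins, not names from the patent:

```python
class FirstServiceServer:
    """Hypothetical sketch of the cache-miss response path."""

    def __init__(self, fetch_origin):
        self.cache = {}                   # URL -> cached response file
        self.fetch_origin = fetch_origin  # callable that pulls the source file

    def respond(self, url):
        if url not in self.cache:
            # No cached file for this URL: obtain the source file ...
            body = self.fetch_origin(url)
            # ... and cache the response file for later requests to this URL.
            self.cache[url] = body
        return self.cache[url]
```

A second request for the same URL is then served from the cache without touching the origin again.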
The specific manner in which the various modules perform operations in the apparatus of the above embodiment has been described in detail in connection with the embodiments of the method and will not be repeated here.
Fig. 5 is a block diagram illustrating a computer device 500 for data processing according to an example embodiment. For example, the computer device 500 may be provided as a server. Referring to fig. 5, the computer device 500 includes a processor 501; the number of processors may be one or more as needed. The computer device 500 further comprises a memory 502 for storing instructions executable by the processor 501, such as application programs; the number of memories may likewise be one or more, and the memory may store one or more applications. The processor 501 is configured to execute the instructions to perform the data processing method described above.
It will be apparent to those of ordinary skill in the art that embodiments herein may be provided as a method, an apparatus (device), or a computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data, including, but not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disc (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.
The description herein is with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments herein. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in an article or apparatus that comprises the element.
While preferred embodiments herein have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all alterations and modifications as fall within the scope herein.
It will be apparent to those skilled in the art that various modifications and variations can be made herein without departing from the spirit and scope of the disclosure. Thus, given that such modifications and variations herein fall within the scope of the claims herein and their equivalents, such modifications and variations are intended to be included herein.
Claims (16)
1. A data processing method applied to a service server, comprising:
the first service server receives a URL request, the URL request being forwarded by different load balancers;
determining a target server based on the URL of each request;
when the target server is the first service server, responding to the request by the first service server; and when the target server is not the first service server, forwarding the URL request to the target server so that the target server obtains a response file of the URL request, generating a response message, modifying a source address of the response message to be a first service server address, and sending the response message to a load balancer for forwarding the request of the URL.
2. The data processing method of claim 1, wherein the determining the target server based on the URL of each request comprises:
when the server in the node has the cache file of the URL, determining the server with the cache file of the URL as a target server;
and when the node server does not have the cached file of the URL, determining a target server according to the consumption ability score of the node server.
3. The data processing method of claim 2, wherein the determining the target server based on the consumption capability scores of the servers within the node comprises:
when the consumption capacity score of the first service server meets a threshold value, determining the first service server as a target server, or determining a server with the highest consumption capacity score in a node as a target server;
and when the consumption capacity score of the first service server does not meet the threshold value, determining the server with the highest consumption capacity score in the node as a target server.
4. A data processing method as claimed in claim 3, characterized in that,
the consumption capability score is the sum of a current request accumulation count score, a current request accumulation trend score, and a server configuration score.
5. The data processing method of claim 4, wherein when the URL is plural, the forwarding the request of the URL to the target server includes: and calculating a forwarding proportion according to the consumption capability score of the target server, and sending the URL request to the target server according to the forwarding proportion.
6. The data processing method of claim 5, wherein the forwarding ratio = (accumulated request count / total request count) × (consumption capability score of the target server / consumption capability score of the first service server).
7. The data processing method of claim 2, wherein the responding by the first service server to the request when the target server is the first service server comprises: and if the first service server does not have the cache file of the URL, acquiring a source file by the first service server and responding to the request of the URL, and caching the response file.
8. A data processing apparatus for use with a service server, comprising:
the receiving module is used for the first service server to receive a URL request, the URL request being forwarded by different load balancers;
a target server determining module for determining a target server based on the URL of each request;
the forwarding processing module is used for responding to the request by the first service server when the target server is the first service server; and when the target server is not the first service server, forwarding the URL request to the target server so that the target server obtains a response file of the URL request, generates a response message, modifies a source address of the response message to be a first service server address, and sends the response message to a load balancer for forwarding the request of the URL.
9. The data processing apparatus of claim 8, wherein the target server determination module determining a target server comprises:
when the server in the node has the cache file of the URL, determining the server with the cache file of the URL as a target server;
and when the node server does not have the cached file of the URL, determining a target server according to the consumption ability score of the node server.
10. The data processing apparatus of claim 9, wherein the determining the target server based on the consumption capability scores of the servers within the node comprises:
when the consumption capacity score of the first service server meets a threshold value, determining the first service server as a target server, or determining a server with the highest consumption capacity score in a node as a target server;
and when the consumption capacity score of the first service server does not meet the threshold value, determining the server with the highest consumption capacity score in the node as a target server.
11. The data processing apparatus of claim 10, wherein the consumption capability score is the sum of a current request accumulation count score, a current request accumulation trend score, and a server configuration score.
12. The data processing apparatus of claim 11, wherein when the URL is plural, the forwarding the request for the URL to the target server comprises: and calculating a forwarding proportion according to the consumption capability score of the target server, and sending the URL request to the target server according to the forwarding proportion.
13. The data processing apparatus of claim 12, wherein the forwarding ratio = (accumulated request count / total request count) × (consumption capability score of the target server / consumption capability score of the first service server).
14. The data processing apparatus of claim 9, wherein the responding to the request by the first service server when the target server is the first service server comprises: and if the first service server does not have the cache file of the URL, acquiring a source file by the first service server and responding to the request of the URL, and caching the response file.
15. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed, implements the steps of the method according to any one of claims 1-7.
16. A data processing system, characterized in that the system comprises a data processing device according to any of claims 8-14.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911256210.6A CN112953984B (en) | 2019-12-10 | 2019-12-10 | Data processing method, device, medium and system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112953984A CN112953984A (en) | 2021-06-11 |
| CN112953984B true CN112953984B (en) | 2023-07-28 |
Family
ID=76225321
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911256210.6A Active CN112953984B (en) | 2019-12-10 | 2019-12-10 | Data processing method, device, medium and system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112953984B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113472901B (en) * | 2021-09-02 | 2022-01-11 | 深圳市信润富联数字科技有限公司 | Load balancing method, device, equipment, storage medium and program product |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107835437A (en) * | 2017-10-20 | 2018-03-23 | 广东省南方数字电视无线传播有限公司 | Dispatching method and device based on more caching servers |
| CN109327550A (en) * | 2018-11-30 | 2019-02-12 | 网宿科技股份有限公司 | An access request allocation method, device, storage medium and computer equipment |
| CN109542613A (en) * | 2017-09-22 | 2019-03-29 | 中兴通讯股份有限公司 | Distribution method, device and the storage medium of service dispatch in a kind of CDN node |
| CN109995881A (en) * | 2019-04-30 | 2019-07-09 | 网易(杭州)网络有限公司 | The load-balancing method and device of cache server |
| CN110493053A (en) * | 2019-08-22 | 2019-11-22 | 北京首都在线科技股份有限公司 | Merge monitoring method, device, terminal and the storage medium of content distributing network |
Family Cites Families (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7912978B2 (en) * | 2000-07-19 | 2011-03-22 | Akamai Technologies, Inc. | Method for determining metrics of a content delivery and global traffic management network |
| CN104283963B (en) * | 2014-10-21 | 2018-02-09 | 无锡云捷科技有限公司 | A kind of CDN load-balancing methods of Distributed Cooperative formula |
| CN104980482B (en) * | 2014-12-24 | 2019-09-13 | 深圳市腾讯计算机系统有限公司 | Document sending method and device, document receiving method and device |
| CN105262841A (en) * | 2015-11-06 | 2016-01-20 | 浪潮软件集团有限公司 | CDN network load balancing implementation method and CDN scheduling server |
| CN105376317A (en) * | 2015-11-19 | 2016-03-02 | 网宿科技股份有限公司 | Load balancing control method and load balancing control device |
| CN105933398A (en) * | 2016-04-18 | 2016-09-07 | 乐视控股(北京)有限公司 | Access request forwarding method and system in content distribution network |
| CN108076092A (en) * | 2016-11-14 | 2018-05-25 | 北大方正集团有限公司 | Web server resources balance method and device |
| CN109995839B (en) * | 2018-01-02 | 2021-11-19 | 中国移动通信有限公司研究院 | Load balancing method, system and load balancer |
| CN109246229B (en) * | 2018-09-28 | 2021-08-27 | 网宿科技股份有限公司 | Method and device for distributing resource acquisition request |
| CN110308983B (en) * | 2019-04-19 | 2022-04-05 | 中国工商银行股份有限公司 | Resource load balancing method and system, service node and client |
| CN110049130B (en) * | 2019-04-22 | 2020-07-24 | 北京邮电大学 | A method and device for service deployment and task scheduling based on edge computing |
| CN110198307B (en) * | 2019-05-10 | 2021-05-18 | 深圳市腾讯计算机系统有限公司 | Method, device and system for selecting mobile edge computing node |
2019-12-10: application CN201911256210.6A filed in China (CN); granted as patent CN112953984B, legal status Active.
Non-Patent Citations (4)
| Title |
|---|
| "Load-Balancing and High-Availability for a Machine Learning Architecture"; F. Dubosson; IEEE; full text * |
| "Towards an improved heuristic genetic algorithm for static content delivery in cloud storage"; Z. Zheng; Computers & Electrical Engineering; full text * |
| "Design and Application of an Internet Content Caching System"; Deng Honghui; China Cable Television (Issue 03); full text * |
| "A Fast Content Distribution Method for Cloud Platforms Incorporating P2P Technology"; Liu Jing, Zhao Wenju; Journal of Computer Applications (Issue 01); full text * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |