Disclosure of Invention
In the prior art, a content management system indiscriminately pushes a task to every edge cache server, so that edge cache servers have to process tasks that do not belong to them, which reduces their task processing efficiency.
In a first aspect, an embodiment of the present application provides a task pushing method, including:
a node autonomous server receives a task pushing request, wherein the task pushing request carries an identification of a target object and an operation task of the target object;
the node autonomous server queries a cache list according to the identifier of the target object and determines an edge cache server caching the target object, wherein the cache list correspondingly stores the identifiers of the objects cached by each edge cache server and is determined according to a cache log in each edge cache server;
and the node autonomous server pushes the operation task of the target object to the edge cache server caching the target object, so that the edge cache server caching the target object executes the operation task of the target object.
Optionally, the cache list includes a plurality of bloom filters, and one bloom filter correspondingly stores the hash values of the objects cached by one edge cache server;
the node autonomous server queries a cache list according to the identifier of the target object and determines an edge cache server caching the target object, comprising:
the node autonomous server performs hash mapping on the identifier of the target object to determine a target hash value of the target object;
the node autonomous server searches a bloom filter corresponding to each edge cache server according to the target hash value;
and the node autonomous server determines an edge cache server corresponding to the bloom filter containing the target hash value as an edge cache server for caching the target object.
Optionally, determining the cache list according to the cache logs in each edge cache server includes:
for each edge cache server, the node autonomous server collects a cache log from the edge cache server, wherein the cache log stores operation records of each object in the edge cache server;
when the node autonomous server determines from the cache log that the edge cache server has newly cached an object, acquiring an identifier of the newly added object from the cache log;
the node autonomous server performs hash mapping on the identifier of the newly added object to determine a hash value of the newly added object;
and the node autonomous server adds the hash value of the newly added object to a bloom filter corresponding to the edge cache server.
Optionally, the method further comprises:
when the node autonomous server determines from the cache log that the edge cache server has deleted a cached object, acquiring an identifier of the deleted object from the cache log;
the node autonomous server performs hash mapping on the identifier of the deleted object to determine a hash value of the deleted object;
and the node autonomous server deletes the hash value of the deleted object from the bloom filter corresponding to the edge cache server.
Optionally, the operation task of the target object is an update instruction of the target object;
the node autonomous server pushes the operation task of the target object to an edge cache server caching the target object, and the edge cache server caching the target object executes the operation task of the target object, including:
and the node autonomous server pushes the update instruction of the target object to the edge cache server caching the target object, so that the edge cache server caching the target object obtains the updated content of the target object and replaces the old version content of the target object with the updated content.
Optionally, the operation task of the target object is a deletion instruction of the target object;
the node autonomous server pushes the operation task of the target object to an edge cache server caching the target object, and the edge cache server caching the target object executes the operation task of the target object, including:
and the node autonomous server pushes the deletion instruction of the target object to the edge cache server caching the target object, so that the edge cache server caching the target object queries the target object among the cached objects and deletes it.
In a second aspect, an embodiment of the present application provides a node autonomous server, including:
a receiving module, configured to receive a task pushing request, wherein the task pushing request carries an identifier of a target object and an operation task of the target object;
a query module, configured to query a cache list according to the identifier of the target object and determine an edge cache server caching the target object, wherein the cache list correspondingly stores the identifiers of the objects cached by each edge cache server and is determined according to a cache log in each edge cache server;
and a pushing module, configured to push the operation task of the target object to the edge cache server caching the target object, so that the edge cache server caching the target object executes the operation task of the target object.
Optionally, the cache list includes a plurality of bloom filters, and one bloom filter correspondingly stores the hash values of the objects cached by one edge cache server;
the query module is specifically configured to:
performing hash mapping on the identifier of the target object, and determining a target hash value of the target object;
searching a bloom filter corresponding to each edge cache server according to the target hash value;
and determining the edge cache server corresponding to the bloom filter containing the target hash value as the edge cache server for caching the target object.
Optionally, the query module is specifically configured to:
for each edge cache server, acquiring a cache log from the edge cache server, wherein the cache log stores operation records of each object in the edge cache server;
when it is determined from the cache log that the edge cache server has newly cached an object, acquiring an identifier of the newly added object from the cache log;
performing hash mapping on the identifier of the newly added object, and determining a hash value of the newly added object;
and adding the hash value of the newly added object to a bloom filter corresponding to the edge cache server.
Optionally, the query module is further configured to:
when it is determined from the cache log that the edge cache server has deleted a cached object, acquiring an identifier of the deleted object from the cache log;
performing hash mapping on the identifier of the deleted object to determine a hash value of the deleted object;
and deleting the hash value of the deleted object from the bloom filter corresponding to the edge cache server.
Optionally, the operation task of the target object is an update instruction of the target object;
the pushing module is specifically configured to:
and pushing the update instruction of the target object to an edge cache server caching the target object so that the edge cache server caching the target object obtains the update content of the target object and replaces the old version content of the target object with the update content of the target object.
Optionally, the operation task of the target object is a deletion instruction of the target object;
the pushing module is specifically configured to:
and pushing the deletion instruction of the target object to the edge cache server caching the target object, so that the edge cache server caching the target object queries the target object among the cached objects and deletes it.
In a third aspect, an embodiment of the present application provides an edge cache server, including:
the receiving module is used for receiving an operation task of a target object pushed by the node autonomous server;
wherein the edge cache server is determined by the node autonomous server by querying a cache list according to the identifier of the target object carried in the received task pushing request, and the cache list correspondingly stores the identifiers of the objects cached by each edge cache server.
In a fourth aspect, an embodiment of the present application provides a task pushing device, including at least one processing unit and at least one storage unit, where the storage unit stores a computer program, and when the program is executed by the processing unit, the processing unit is caused to execute the steps of the method in the first aspect.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium, which stores a computer program executable by a task pushing device, and when the program runs on the task pushing device, the program causes the task pushing device to perform the steps of the method of the first aspect.
Since the cache list of the node autonomous server correspondingly stores the identifiers of the objects cached by each edge cache server, upon receiving a task pushing request the node autonomous server can first query the cache list according to the identifier of the target object carried in the request, determine the edge cache servers caching the target object, and then push the operation task of the target object only to those edge cache servers, without pushing it to all edge cache servers. Therefore, each edge cache server only needs to process the tasks of the web pages it caches, which improves the efficiency of the edge cache servers in processing tasks and improves the satisfaction of the clients that need to push tasks. The node autonomous server does not need to push the operation task of the target object to all edge cache servers, which reduces the pressure on the node autonomous server and avoids wasting bandwidth resources.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
In specific practice, the inventor of the present invention found that, when a client submits a task to be pushed to the edge cache servers to a content management system, the content management system indiscriminately pushes the task to each edge cache server corresponding to the client. However, for some tasks of updating or deleting web pages cached in the edge cache servers, only the edge cache servers caching those web pages need to execute the tasks, and the indiscriminate pushing causes edge cache servers that do not cache the corresponding web pages to execute the tasks as well. When a large number of clients push tasks at the same time, an edge cache server needs to process not only the tasks of the web pages it caches but also the tasks of web pages it does not cache, which greatly increases the pressure on the edge cache server, reduces its task processing efficiency, and causes a large number of tasks to accumulate in the edge cache server waiting to be processed. Moreover, the tasks that clients request to push are generally configured with a timeout, so customer complaints increase when a large number of tasks time out.
For this reason, the inventor of the present invention considered that, for tasks of updating or deleting web pages cached in the edge cache servers, a differential pushing method may be adopted, that is, a task of updating or deleting a web page is pushed only to the edge cache servers caching that web page, and not to the edge cache servers that do not cache it. Specifically, the node autonomous server receives a task pushing request, wherein the task pushing request carries an identifier of a target object and an operation task of the target object, and the operation task at least includes updating and deleting. The node autonomous server then queries a cache list according to the identifier of the target object and determines the edge cache servers caching the target object. The cache list correspondingly stores the identifiers of the objects cached by each edge cache server, and the node autonomous server collects the cache logs of each edge cache server to update the cache list. The node autonomous server then pushes the operation task of the target object to the edge cache servers caching the target object, so that those edge cache servers execute the operation task. In this way, an edge cache server only needs to process the operation tasks of the web pages it caches, which effectively improves task processing efficiency, avoids task timeouts, and reduces customer complaints. In addition, the node autonomous server does not need to push tasks to every edge cache server, which saves bandwidth resources.
The task pushing method in the embodiment of the present application may be applied to an application scenario as shown in fig. 1, where the application scenario includes a content management system 101, a node autonomous server 102, an edge cache server 103, a client terminal 104, and a user terminal 105.
The client terminal 104 is an electronic device with network communication capability, which may be a smart phone, a tablet computer, a portable personal computer, or the like. The client terminal 104 is connected to the content management system 101 through a wireless network. The content management system 101 is connected to one or more node autonomous servers 102 through a wireless network; in each area or machine room, one server is selected as the node autonomous server 102, and the node autonomous server 102 manages the edge cache servers 103 in that area or machine room. The node autonomous server 102 is connected to one or more edge cache servers 103. The user terminal 105 is an electronic device with network communication capability, which may be a smart phone, a tablet computer, a portable personal computer, or the like. The user terminal 105 is connected to the edge cache server 103 through a wireless network. The user terminal 105 may request web page content, such as a Taobao web page or a live-streaming web page, from the edge cache server 103, and may also request APP content, such as a Taobao APP, a live-streaming APP, or a Tencent Video APP, from the edge cache server 103.
Taking a user accessing a web page as an example, after the user inputs a website address or clicks a web page link at the user terminal 105, the user terminal 105 sends a web page request to the edge cache server 103, and the edge cache server 103 queries its cached web pages. When the edge cache server 103 caches the web page requested by the user terminal 105, the edge cache server 103 directly returns the cached web page to the user terminal 105. When the edge cache server 103 does not cache the requested web page, it acquires the web page from the origin server of the web page and returns it to the user terminal 105. The edge cache server 103 also caches the web pages retrieved from the origin server.
When a client updates or deletes a web page in the origin server, the corresponding cached web page in the edge cache servers 103 also needs to be updated or deleted, so that the web page the user acquires from the edge cache server 103 is the same as the web page acquired from the origin server. Therefore, in this embodiment of the application, the client terminal 104 sends a task pushing request to the content management system 101, where the task pushing request carries an identifier of the target object and an operation task of the target object, and the operation task at least includes updating and deleting. The content management system 101 sends the task pushing request to the node autonomous server 102 corresponding to the client. The cache list of the node autonomous server 102 stores the hash values of the objects cached in all the edge cache servers 103 corresponding to the client. The node autonomous server 102 queries the cache list according to the identifier of the target object and determines the edge cache servers 103 caching the target object. The task pushing request is then sent to the edge cache servers 103 caching the target object, and these edge cache servers 103 execute the operation task of the target object.
Further, in the application scenario shown in fig. 1, a schematic structural diagram of the node autonomous server 102 is shown in fig. 2, where the node autonomous server 102 includes: a communication layer 1021, a parsing layer 1022, a service layer 1023, a data layer 1024, and a storage layer 1025.
The communication layer 1021 mainly includes an acquisition module and a Remote Procedure Call (RPC) module, where the acquisition module is used to collect the cache logs of the edge cache servers, and the RPC module is used to interact with the content management system. The parsing layer 1022 includes a cache parsing module and a communication protocol parsing module. The cache parsing module is used to parse the cache log and invalidation record format of the edge cache servers and add the parsed result to the counting bloom filters in the data layer 1024. The communication protocol parsing module is used to parse the message protocol for communication between the content management system and the node autonomous server, and may support JSON, Protobuf, XML, and the like. The functions of the service layer 1023 include hash mapping, cache queries, cache logs, monitoring statistics, system logs, etc. The data layer 1024 includes the counting bloom filters corresponding to all edge cache servers under the node autonomous server 102. The storage layer 1025 persists the counting bloom filter records of the data layer 1024, to prevent push misses caused by data loss when the node autonomous server 102 crashes or restarts.
Based on the application scenario diagram shown in fig. 1 and the structure of the node autonomous server shown in fig. 2, an embodiment of the present application provides a flow of a task pushing method, where the flow of the method may be interactively executed by the node autonomous server and the edge cache server, as shown in fig. 3, the method includes the following steps:
step S301, the node autonomous server receives a task pushing request, wherein the task pushing request carries an identifier of a target object and an operation task of the target object.
The target object may be content cached on the edge cache servers, such as web page content or APP content. The identifier of the object may be a domain name, a Uniform Resource Locator (URL), or the like. The operation task may be an update instruction, a deletion instruction, or the like. The content management system receives the task pushing request sent by a client and then determines the corresponding node autonomous server according to the identifier of the target object carried in the task pushing request. Generally, each client is preconfigured with a node autonomous server, so the corresponding client can be determined according to the identifier of the target object, the node autonomous server corresponding to that identifier can then be determined, and the task pushing request is sent to that node autonomous server.
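As a minimal illustrative sketch, the routing step described above, in which the content management system determines the preconfigured node autonomous server from the identifier of the target object, might look like the following. The request fields, function names, and the domain-based mapping are assumptions for illustration only; the embodiment only states that each client is preconfigured with a node autonomous server.

```python
from urllib.parse import urlparse

# Hypothetical preconfigured mapping: client domain -> node autonomous server.
CLIENT_TO_NODE_SERVER = {
    "www.abc.com": "node-server-1",
    "www.123.com": "node-server-2",
}

def route_task_push(request: dict) -> tuple[str, dict]:
    """Pick the node autonomous server for a task pushing request.

    The request carries the target object's identifier (here a URL)
    and the operation task ("update" or "delete")."""
    target_id = request["target_id"]
    # Extract the domain part of the identifier to find the owning client.
    domain = urlparse("//" + target_id).netloc or target_id
    node_server = CLIENT_TO_NODE_SERVER[domain]
    return node_server, {"target_id": target_id, "task": request["task"]}
```

The node autonomous server then receives the forwarded request and proceeds to step S302.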
Step S302, the node autonomous server queries a cache list according to the identifier of the target object and determines an edge cache server caching the target object.
The cache list correspondingly stores the identification of the target object cached by each edge cache server, and is determined according to the cache log in each edge cache server.
Step S303, the node autonomous server pushes the operation task of the target object to the edge cache server caching the target object.
In step S304, the edge cache server executes the operation task of the target object.
Since the cache list correspondingly stores the identifiers of the objects cached by each edge cache server, when a task pushing request is received, the cache list can be queried according to the identifier of the target object carried in the request to determine the edge cache servers caching the target object, and the operation task of the target object then only needs to be pushed to those edge cache servers rather than to all edge cache servers. Therefore, each edge cache server only needs to process the tasks of the web pages it caches, which improves the efficiency of the edge cache servers in processing tasks and improves the satisfaction of the clients that need to push tasks. The node autonomous server does not need to push the operation task of the target object to all edge cache servers, which reduces the pressure on the node autonomous server and avoids wasting bandwidth resources.
In the above step S302, the cache list is determined according to the cache log in each edge cache server.
For each edge cache server, the node autonomous server collects the cache log from the edge cache server, and the cache log stores the operation records of each object in the edge cache server. Specifically, when it is determined from the cache log that the edge cache server has newly cached an object, the identifier of the newly added object is collected from the cache log and added to the cache list. When it is determined from the cache log that the edge cache server has deleted a cached object, the identifier of the deleted object is collected from the cache log and deleted from the cache list.
The following is an exemplary description of the cache log.
Illustratively, when receiving a web page request for www.abc.com sent by a user, the edge cache server acquires the web page from the origin server and caches it, and records in the cache log the web page address (www.abc.com) and the web page operation (newly cached web page).
Illustratively, when the web page www.abc.com cached in the edge cache server expires, the edge cache server automatically deletes it and records in the cache log the web page address (www.abc.com) and the web page operation (cached web page deleted).
Alternatively, the node autonomous server may periodically collect the cache log of the edge cache server; for example, the node autonomous server collects the previous day's cache log of the edge cache server at midnight every day. The node autonomous server may also monitor the cache log of the edge cache server in real time and collect newly added records as they are appended. For example, when the edge cache server newly caches the web page www.abc.com, the node autonomous server collects from the cache log the web page address www.abc.com and the web page operation (newly cached web page), and then adds the web page address www.abc.com to the cache list. For another example, when the edge cache server deletes the cached web page www.abc.com, the node autonomous server collects the deleted web page address www.abc.com from the cache log. By monitoring the cache logs of the edge cache servers and updating the cache list in the node autonomous server in time, when a task pushing request of a client is received, the edge cache servers currently caching the target object can be determined according to the cache list, so that no server is missed when the task is pushed.
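The log-driven cache list maintenance described above can be sketched as follows. The log line format `"<address> <ADD|DEL>"` and the function name are assumptions for illustration, since the embodiment only specifies that the cache log records the object address and the operation (newly cached / deleted).

```python
def apply_cache_log(cache_list: dict, server_id: str, log_lines: list) -> None:
    """Update the per-server cache list from newly collected log records.

    cache_list maps an edge cache server identifier to the set of
    object addresses it currently caches (assumed representation)."""
    cached = cache_list.setdefault(server_id, set())
    for line in log_lines:
        address, op = line.split()
        if op == "ADD":        # edge server newly cached the object
            cached.add(address)
        elif op == "DEL":      # cached object expired or was deleted
            cached.discard(address)
```

For example, collecting `"www.abc.com ADD"` and later `"www.abc.com DEL"` for edge cache server 1 first adds and then removes www.abc.com from that server's entry in the cache list.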
For a cache list in a node autonomous server, the embodiments of the present application provide at least the following implementation manners:
in a possible implementation manner, the cache list in the node autonomous server may be a general table, and the identifier of the object cached by each edge cache server and the identifier of each edge cache server are stored in the cache list in correspondence.
In one possible implementation, the cache list in the node autonomous server includes a plurality of sub-lists, and one sub-list correspondingly stores the identifiers of the objects cached by one edge cache server.
Illustratively, the cache list of the node autonomous server includes a sub-list A and a sub-list B, corresponding to the edge cache server 1 and the edge cache server 2 managed by the node autonomous server. The acquisition module periodically collects the cache logs of the 2 edge cache servers. When the cache log of the edge cache server 1 records the newly cached web page www.abc.com, the web page address www.abc.com is added to sub-list A. When the cache log of the edge cache server 2 records that www.12.com has expired, the web page address www.12.com is queried in sub-list B and deleted.
In a possible implementation manner, the cache list includes a plurality of bloom filters, and one bloom filter correspondingly stores the hash values of the objects cached by one edge cache server.
The bloom filter may be a counting bloom filter, which adds a deletion operation to the basic bloom filter. A bloom filter is a space-efficient probabilistic data structure that uses a bit array to represent a set and can judge whether an element belongs to the set; it occupies little memory and offers high lookup efficiency. A counting bloom filter expands each bit in the bit array of the bloom filter into a counter: when an element is inserted, each of the k counters it maps to (k being the number of hash functions) is incremented by 1, and when an element is deleted, each of those k counters is decremented by 1, thereby supporting the deletion operation.
For each edge cache server, the node autonomous server collects the cache log from the edge cache server, and the cache log stores the operation records of each object in the edge cache server. When the node autonomous server determines from the cache log that the edge cache server has newly cached an object, it collects the identifier of the newly added object from the cache log, performs hash mapping on the identifier to determine the hash value of the newly added object, and then adds the hash value of the newly added object to the bloom filter corresponding to the edge cache server.
For example, the initial state of the counter array of the counting bloom filter is shown in fig. 4: a set of 10 short-integer counters, each of which is 0. Setting the number of hash functions to 3 and the newly added web page address to www.abc.com, the 3 hash functions hash-map the newly added web page address to the counter array of the counting bloom filter, corresponding to the 1st, 3rd, and 7th counters, and each of these counters is incremented by 1, as shown in fig. 5. If the web page www.123.com is then newly added, the 3 hash functions hash-map its address to the counter array, and the corresponding counters are incremented by 1; the counter array of the counting bloom filter at this point is shown in fig. 5. After the two newly added web pages are hash-mapped, the 3rd counter has been incremented by 1 for each of them, so the 3rd counter in the counter array of the counting bloom filter is 2.
For each edge cache server, when the node autonomous server determines from the cache log that the edge cache server has deleted a cached object, it collects the identifier of the deleted object from the cache log, performs hash mapping on the identifier to determine the hash value of the deleted object, and then deletes the hash value of the deleted object from the bloom filter corresponding to the edge cache server.
Illustratively, the current state of the counter array of the counting bloom filter is shown in fig. 6. The number of hash functions is 3 and the deleted web page address is www.abc.com. The 3 hash functions hash-map the deleted web page address to the counter array of the counting bloom filter, corresponding to the 1st, 3rd, and 7th counters, and each of these counters is decremented by 1, as shown in fig. 7. Because counting bloom filters are used to store the hash values of the web page addresses cached by each edge cache server, the node autonomous server occupies little memory.
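The insertion and deletion behaviour illustrated in the two examples above can be sketched as follows. The concrete hash functions (salted SHA-256), the 10-counter array size, and k = 3 are illustrative assumptions matching the examples, not a normative part of the embodiment.

```python
import hashlib

class CountingBloomFilter:
    """Counting bloom filter: each element maps to k counters, which are
    incremented on insertion and decremented on deletion."""

    def __init__(self, size: int = 10, k: int = 3):
        self.counters = [0] * size
        self.k = k

    def _positions(self, identifier: str) -> list:
        # Derive k counter positions from k salted hashes of the identifier.
        return [
            int(hashlib.sha256(f"{i}:{identifier}".encode()).hexdigest(), 16)
            % len(self.counters)
            for i in range(self.k)
        ]

    def add(self, identifier: str) -> None:
        for pos in self._positions(identifier):
            self.counters[pos] += 1

    def remove(self, identifier: str) -> None:
        for pos in self._positions(identifier):
            self.counters[pos] -= 1

    def contains(self, identifier: str) -> bool:
        # All k counters non-zero: the object is possibly cached
        # (bloom filters admit false positives, never false negatives).
        return all(self.counters[pos] > 0 for pos in self._positions(identifier))
```

Adding www.abc.com increments its three counters; removing it restores them, after which the lookup correctly reports the object as absent.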
Further, in step S302, the node autonomous server queries the cache list according to the identifier of the target object and determines the edge cache servers caching the target object, as shown in fig. 8, which specifically includes the following steps:
in step S801, hash mapping is performed on the object of the target object, and a target hash value of the target object is determined.
Step S802, a bloom filter corresponding to each edge cache server is searched according to the target hash value.
In step S803, the edge cache server corresponding to the bloom filter containing the target hash value is determined as the edge cache server caching the target object.
Illustratively, the identifier of the target object is set to www.abc.com, and the cache list of the node autonomous server includes counting bloom filter A and counting bloom filter B, corresponding to the edge cache server 1 and the edge cache server 2 managed by the node autonomous server. The current state of the counter array of counting bloom filter A is shown in fig. 9. The identifier of the target object is hash-mapped by 3 hash functions into the counter array of counting bloom filter A, corresponding to the 1st, 3rd, and 7th counters; since the 1st, 3rd, and 7th counters in the counter array shown in fig. 9 are all non-zero, it is determined that the target object is cached in the edge cache server 1. The current state of the counter array of counting bloom filter B is shown in fig. 10. The identifier of the target object is hash-mapped by the 3 hash functions into the counter array of counting bloom filter B, corresponding to the 1st, 3rd, and 7th counters; since the 7th of these counters in the counter array shown in fig. 10 is 0, it is determined that the target object is not cached in the edge cache server 2. Thus, the edge cache server 1 is determined to be an edge cache server caching the target object www.abc.com, while the edge cache server 2 is not. Because the edge cache servers caching the target object are found from the counting bloom filters through hash mapping, the lookup efficiency is high, which improves task processing efficiency.
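The query of steps S801 to S803 can be sketched as follows: hash-map the target identifier to counter positions once, then check each edge cache server's counter array; a server whose counters at all k positions are non-zero is treated as caching the object. The hash functions and array size are illustrative assumptions.

```python
import hashlib

K, SIZE = 3, 10  # illustrative: 3 hash functions, 10 counters per filter

def positions(identifier: str) -> list:
    # S801: hash-map the identifier to K counter positions.
    return [
        int(hashlib.sha256(f"{i}:{identifier}".encode()).hexdigest(), 16) % SIZE
        for i in range(K)
    ]

def servers_caching(target_id: str, filters: dict) -> list:
    """S802/S803: return the edge cache servers whose counting bloom
    filter (a counter array) contains the target object's hash positions."""
    pos = positions(target_id)
    return [
        server for server, counters in filters.items()
        if all(counters[p] > 0 for p in pos)
    ]
```

With edge cache server 1's counters incremented at the target's positions and edge cache server 2's array all zero, only server 1 is selected for the push.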
In the above step S303 and step S304, when the operation task of the target object is an update instruction of the target object, the node autonomous server pushes the update instruction of the target object to the edge cache server that caches the target object, and that edge cache server acquires the updated content of the target object and updates the target object.
In a possible implementation manner, when the node autonomous server pushes the update instruction of the target object to the edge cache server caching the target object, the node autonomous server also sends the updated content of the target object to that edge cache server, and the edge cache server replaces the old version of the target object with the updated content. Illustratively, when a client needs to update an image in a webpage, it sends a task pushing request carrying the new image to the node autonomous server; the node autonomous server then sends an update instruction for the webpage to the edge cache server caching the webpage and simultaneously sends the new image, and the edge cache server replaces the old image in the webpage with the new image.
In a possible implementation manner, when the edge cache server caching the target object receives the update instruction of the target object, the edge cache server acquires the updated content of the target object from the source server of the target object, and then replaces the old version of the target object with the updated content. Illustratively, when the edge cache server receives an instruction to update the commodity price in a webpage, the edge cache server obtains the new commodity price from the source server of the webpage, and then replaces the old commodity price in the webpage with the new one.
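Both update paths described above (updated content pushed along with the instruction, or pulled from the source server on receipt) can be sketched as follows; the function and parameter names are illustrative assumptions, not part of the embodiment.

```python
def apply_update(cache: dict, identifier: str,
                 pushed_content=None, fetch_from_origin=None) -> bool:
    """Replace the old version of a cached object with its updated content.

    The updated content either arrives with the push (pushed_content) or is
    pulled from the object's source server (the fetch_from_origin callback).
    Returns False when the object is not cached on this server.
    """
    if identifier not in cache:
        return False
    if pushed_content is not None:
        cache[identifier] = pushed_content      # content carried in the push request
    elif fetch_from_origin is not None:
        cache[identifier] = fetch_from_origin(identifier)  # pull from source server
    return True
```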
In the above step S303 and step S304, when the operation task of the target object is a deletion instruction of the target object, the node autonomous server pushes the deletion instruction of the target object to the edge cache server that caches the target object, and that edge cache server finds the target object among its cached objects and deletes it. Optionally, after the edge cache server deletes the target object, a new record is added to the cache log; the node autonomous server collects the new record and deletes the identifier of the target object from the cache list accordingly. When a client needs to update or delete a target object cached in an edge cache server, the update or deletion task is sent only to the edge cache servers caching the target object rather than pushed to every edge cache server, which reduces the load on the edge cache servers and improves task processing efficiency.
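The deletion handling on the edge cache server, including the cache-log record that the node autonomous server later collects to update its cache list, can be sketched as below; the record fields are assumed for illustration.

```python
import time

def delete_object(cache: dict, cache_log: list, identifier: str) -> bool:
    """Delete a cached object and append a deletion record to the cache log.

    The node autonomous server later collects this record and removes the
    object's identifier from its cache list accordingly.
    """
    if identifier not in cache:
        return False
    del cache[identifier]
    cache_log.append({"op": "delete", "id": identifier, "ts": time.time()})
    return True
```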
In order to better explain the embodiment of the application, the task pushing method provided by the embodiment of the application is described below with reference to a specific implementation scenario, in which the client is an e-commerce vendor, the target object is the detail page of commodity A, and the vendor modifies the price of commodity A to promote sales on holidays. As shown in fig. 11, the vendor sends a task push request to the content management system 1101, where the task push request carries the URL of the commodity A detail page and an operation instruction for modifying the price of commodity A. The content management system 1101 selects the node autonomous server 1102 corresponding to the vendor according to the URL of the commodity A detail page, and sends the task push request to the node autonomous server 1102. The node autonomous server 1102 includes an indexing system 11021 based on counting bloom filters. The acquisition system 1104 collects the cache logs of the edge cache servers 1103 in real time; the acquisition module of the indexing system 11021 obtains the cache logs from the acquisition system 1104 and updates the counting bloom filters in the indexing system 11021 according to the cache logs. The indexing system 11021 hashes the URL of the commodity A detail page to determine its hash value, queries the counting bloom filter corresponding to each edge cache server 1103, and determines the edge cache servers 1103 whose counting bloom filters contain the hash value of the commodity A detail page as the edge cache servers 1103 caching the commodity A detail page. The operation task of modifying the price of commodity A is then sent to those edge cache servers 1103, which modify the commodity price in the cached commodity A detail page.
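The routing step at the heart of this scenario — pushing the price-modification task only to servers whose counting bloom filter contains the URL — can be sketched as follows. The hash scheme, array size, and data shapes are assumptions made for the sketch.

```python
import hashlib

ARRAY_SIZE = 16
NUM_HASHES = 3

def hash_positions(url: str) -> list[int]:
    """Map a URL to NUM_HASHES counter positions (assumed hash scheme)."""
    return [int(hashlib.sha256(f"{i}:{url}".encode()).hexdigest(), 16) % ARRAY_SIZE
            for i in range(NUM_HASHES)]

def push_task(filters: dict[str, list[int]], url: str, task: str) -> dict[str, str]:
    """Route an operation task only to the edge cache servers whose counting
    bloom filter contains all of the URL's mapped counter positions."""
    positions = hash_positions(url)
    return {server: task                     # stand-in for actually sending the task
            for server, counters in filters.items()
            if all(counters[p] > 0 for p in positions)}
```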
Because the operation task of the target object is pushed only to the edge cache servers caching the target object rather than to all edge cache servers, each edge cache server only needs to process tasks for the objects it caches, which improves the efficiency of the edge cache servers in processing tasks and the satisfaction of clients. The node autonomous server likewise does not need to push the operation task of the target object to all edge cache servers, which reduces the load on the node autonomous server and avoids wasting bandwidth resources.
Based on the same technical concept, an embodiment of the present application provides a node autonomic server, as shown in fig. 12, the apparatus 1200 includes: a receiving module 1201, a query module 1202, a pushing module 1203 and an updating module 1204.
A receiving module 1201, configured to receive a task pushing request, where the task pushing request carries an identifier of a target object and an operation task of the target object;
the query module 1202 is configured to query a cache list according to the identifier of the target object, and determine an edge cache server that caches the target object, where the cache list correspondingly stores the identifier of the target object cached by each edge cache server, and the cache list is determined according to a cache log in each edge cache server;
a pushing module 1203, configured to push the operation task of the target object to an edge cache server that caches the target object, so that the edge cache server that caches the target object executes the operation task of the target object.
Optionally, the cache list includes a plurality of bloom filters, and one bloom filter correspondingly stores a hash value of an object cached by one edge cache server;
the query module 1202 is specifically configured to:
performing hash mapping on the identifier of the target object, and determining a target hash value of the target object;
searching a bloom filter corresponding to each edge cache server according to the target hash value;
and determining the edge cache server corresponding to the bloom filter containing the target hash value as the edge cache server for caching the target object.
Optionally, the query module 1202 is specifically configured to:
for each edge cache server, acquiring a cache log from the edge cache server, wherein the cache log stores operation records of each object in the edge cache server;
when it is determined from the cache log that the edge cache server has newly cached an object, acquiring an identifier of the newly cached object from the cache log;
performing hash mapping on the identifier of the newly added object, and determining a hash value of the newly added object;
and adding the hash value of the newly added object to a bloom filter corresponding to the edge cache server.
The query module 1202 is further configured to:
when it is determined from the cache log that the edge cache server has deleted a cached object, acquiring an identifier of the deleted object from the cache log;
performing hash mapping on the identifier of the deleted object to determine a hash value of the deleted object;
and deleting the hash value of the deleted object from the bloom filter corresponding to the edge cache server.
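The add and delete maintenance steps above can be sketched together. Because each slot is a counter rather than a single bit, a deletion can decrement exactly what an earlier addition incremented — which is what makes removal from the filter possible at all. The hash scheme and array size are assumed for illustration.

```python
import hashlib

ARRAY_SIZE = 16   # counter-array size, assumed for illustration
NUM_HASHES = 3

def hash_positions(identifier: str) -> list[int]:
    """Map an object identifier to NUM_HASHES counter positions."""
    return [int(hashlib.sha256(f"{i}:{identifier}".encode()).hexdigest(), 16) % ARRAY_SIZE
            for i in range(NUM_HASHES)]

def apply_log_record(counters: list[int], record: dict) -> None:
    """Update one edge cache server's counting bloom filter from a cache-log record."""
    for pos in hash_positions(record["id"]):
        if record["op"] == "add":
            counters[pos] += 1              # newly cached object: increment
        elif record["op"] == "delete" and counters[pos] > 0:
            counters[pos] -= 1              # deleted object: decrement
```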
Optionally, the operation task of the target object is an update instruction of the target object;
the pushing module 1203 is specifically configured to:
and pushing the update instruction of the target object to an edge cache server caching the target object so that the edge cache server caching the target object obtains the update content of the target object and replaces the old version content of the target object with the update content of the target object.
Optionally, the operation task of the target object is a deletion instruction of the target object;
the pushing module 1203 is specifically configured to:
and pushing the deletion instruction of the target object to an edge cache server caching the target object, so that the edge cache server caching the target object inquires the target object from the cached object and deletes the target object.
Based on the same technical concept, an embodiment of the present application provides an edge cache server, as shown in fig. 13, the apparatus 1300 includes: a receiving module 1301 and a processing module 1302.
A receiving module 1301, configured to receive an operation task of a target object pushed by a node autonomous server;
a processing module 1302, configured to execute the operation task of the target object, where the edge cache server is determined by the node autonomous server by querying a cache list according to an identifier of the target object carried in a received task push request, and the cache list correspondingly stores identifiers of the objects cached by each edge cache server.
Optionally, the operation task of the target object is an update instruction of the target object;
the receiving module 1301 is specifically configured to:
receiving an update instruction of the target object pushed by the node autonomous server;
the processing module 1302 is specifically configured to:
and acquiring the updating content of the target object, and updating the target object according to the updating content of the target object.
Optionally, the operation task of the target object is a deletion instruction of the target object;
the receiving module 1301 is specifically configured to:
receiving a deletion instruction of the target object pushed by the node autonomous server;
the processing module 1302 is specifically configured to:
and querying the target object among the cached objects and deleting the target object.
Based on the same technical concept, the embodiment of the present application provides a task pushing device, as shown in fig. 14, including at least one processor 1401 and a memory 1402 connected to the at least one processor, where a specific connection medium between the processor 1401 and the memory 1402 is not limited in this embodiment, and the processor 1401 and the memory 1402 are connected through a bus in fig. 14 as an example. The bus may be divided into an address bus, a data bus, a control bus, etc.
In the embodiment of the present application, the memory 1402 stores instructions executable by the at least one processor 1401, and the at least one processor 1401 may execute the steps included in the task pushing method by executing the instructions stored in the memory 1402.
The processor 1401 is the control center of the task pushing device, and may connect various parts of the task pushing device by using various interfaces and lines, and push a task by running or executing the instructions stored in the memory 1402 and invoking the data stored in the memory 1402. Optionally, the processor 1401 may include one or more processing units, and the processor 1401 may integrate an application processor, which mainly handles an operating system, a user interface, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor described above may also not be integrated into the processor 1401. In some embodiments, the processor 1401 and the memory 1402 may be implemented on the same chip, or, in some embodiments, they may be implemented separately on separate chips.
The processor 1401 may be a general-purpose processor such as a Central Processing Unit (CPU), a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present Application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
The memory 1402, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 1402 may include at least one type of storage medium, for example, a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disc, and so on. The memory 1402 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 1402 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
Based on the same technical concept, embodiments of the present application further provide a computer-readable storage medium storing computer instructions which, when executed on a task pushing device, cause the task pushing device to perform the steps of the task pushing method as described above.
It should be apparent to those skilled in the art that embodiments of the present invention may be provided as a method or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.