
CN109167840B - Task pushing method, node autonomous server and edge cache server - Google Patents


Info

Publication number: CN109167840B (application CN201811220162.0A; earlier publication CN109167840A)
Authority: CN (China)
Prior art keywords: target object, server, edge cache, cache server, cache
Legal status: Active (granted; the status listed is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN109167840A
Inventors: 林智贤, 林汉荣
Assignee (original and current): Wangsu Science and Technology Co Ltd
Application filed by Wangsu Science and Technology Co Ltd
Priority to CN201811220162.0A


Classifications

    • H04L 67/55: Push-based network services (network arrangements or protocols for supporting network services or applications)
    • H04L 67/51: Discovery or management of network services, e.g. service location protocol [SLP] or web services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching (provisioning of proxy services)
    • H04L 9/0643: Hash functions, e.g. MD5, SHA, HMAC or f9 MAC (cryptographic mechanisms for secret or secure communications)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Power Engineering (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present application provide a task pushing method, a node autonomous server, and an edge cache server, relating to the technical field of content delivery networks. In the method, when the node autonomous server receives a task push request, it first queries a cache list according to the identifier of the target object carried in the request to determine the edge cache servers that cache the target object, and then pushes the operation task of the target object only to those edge cache servers, rather than to all edge cache servers. Each edge cache server therefore only needs to process tasks for the web pages it has itself cached, which improves the efficiency with which the edge cache servers process tasks as well as customer satisfaction. Because the node autonomous server does not need to push the operation task of the target object to every edge cache server, the load on the node autonomous server is also reduced and waste of bandwidth resources is avoided.

Figure 201811220162

Description

Task pushing method, node autonomous server and edge cache server
Technical Field
The present invention relates to the technical field of Content Delivery Networks (CDN), and in particular, to a task pushing method, a node autonomous server, and an edge cache server.
Background
In a content delivery network, a customer such as an e-commerce operator submits the content to be pushed to a content management system in advance; the content management system then pushes the content to specific edge cache servers according to the customer's requirements, and the edge cache servers cache the pushed content. When a user accesses an edge cache server, the edge cache server returns the content directly if it has cached it; if it has not, it pulls the content from the data source, sends it to the user, and caches the pulled content at the same time, the data source being the server of the customer such as the e-commerce operator. When a customer such as an e-commerce operator needs to update or delete content on the edge cache servers, the operation task is submitted to the content management system, which then pushes it indiscriminately to every edge cache server. Some edge cache servers do not cache the content corresponding to the operation task, yet they still have to process the pushed task, which increases the load on the edge cache servers and reduces their processing efficiency.
Disclosure of Invention
In the prior art, the content management system pushes tasks indiscriminately to every edge cache server, so edge cache servers have to process tasks that do not concern content they have cached, which reduces their processing efficiency. Embodiments of the present application provide a task pushing method, a node autonomous server, and an edge cache server to address this problem.
In a first aspect, an embodiment of the present application provides a task pushing method, including:
a node autonomous server receives a task pushing request, wherein the task pushing request carries an identification of a target object and an operation task of the target object;
the node autonomous server inquires a cache list according to the identification of the target object, and determines an edge cache server for caching the target object, wherein the cache list correspondingly stores the identification of the target object cached by each edge cache server, and is determined according to a cache log in each edge cache server;
and the node autonomous server pushes the operation task of the target object to an edge cache server caching the target object, and the edge cache server caching the target object executes the operation task of the target object.
Optionally, the cache list includes a plurality of bloom filters, and one bloom filter correspondingly stores a hash value of an object cached by one edge cache server;
the node autonomous server inquires a cache list according to the identification of the target object, and determines an edge cache server caching the target object, comprising:
the node autonomous server performs hash mapping on the identifier of the target object to determine a target hash value of the target object;
the node autonomous server searches a bloom filter corresponding to each edge cache server according to the target hash value;
and the node autonomous server determines an edge cache server corresponding to the bloom filter containing the target hash value as an edge cache server for caching the target object.
Optionally, the cache list is determined according to cache logs in each edge cache server, and includes:
for each edge cache server, the node autonomous server collects a cache log from the edge cache server, wherein the cache log stores operation records of each object in the edge cache server;
when the node autonomous server determines from the cache log that the edge cache server has newly added a cached object, acquiring an identifier of the newly added object from the cache log;
the node autonomous server performs Hash mapping on the identification of the newly added object to determine the Hash value of the newly added object;
and the node autonomous server adds the hash value of the newly added object to a bloom filter corresponding to the edge cache server.
Optionally, the method further comprises:
when the node autonomous server determines from the cache log that the edge cache server has deleted a cached object, acquiring an identifier of the deleted object from the cache log;
the node autonomous server performs hash mapping on the identifier of the deleted object to determine a hash value of the deleted object;
and the node autonomous server deletes the hash value of the deleted object from the bloom filter corresponding to the edge cache server.
Optionally, the operation task of the target object is an update instruction of the target object;
the node autonomous server pushes the operation task of the target object to an edge cache server caching the target object, and the edge cache server caching the target object executes the operation task of the target object, including:
and the node autonomous server pushes the update instruction of the target object to an edge cache server caching the target object, the edge cache server caching the target object obtains the update content of the target object, and the old version content of the target object is replaced by the update content of the target object.
Optionally, the operation task of the target object is a deletion instruction of the target object;
the node autonomous server pushes the operation task of the target object to an edge cache server caching the target object, and the edge cache server caching the target object executes the operation task of the target object, including:
and the node autonomous server pushes the deletion instruction of the target object to an edge cache server caching the target object, and the edge cache server caching the target object inquires the target object from the cached object and deletes the target object.
In a second aspect, an embodiment of the present application provides an autonomous node server, including:
a receiving module, configured to receive a task pushing request, wherein the task pushing request carries an identifier of a target object and an operation task of the target object;
the query module is used for querying a cache list according to the identification of the target object and determining an edge cache server for caching the target object, wherein the cache list correspondingly stores the identification of the target object cached by each edge cache server, and the cache list is determined according to a cache log in each edge cache server;
and the pushing module is used for pushing the operation task of the target object to the edge cache server caching the target object so as to enable the edge cache server caching the target object to execute the operation task of the target object.
Optionally, the cache list includes a plurality of bloom filters, and one bloom filter correspondingly stores a hash value of an object cached by one edge cache server;
the query module is specifically configured to:
performing hash mapping on the identifier of the target object, and determining a target hash value of the target object;
searching a bloom filter corresponding to each edge cache server according to the target hash value;
and determining the edge cache server corresponding to the bloom filter containing the target hash value as the edge cache server for caching the target object.
Optionally, the query module is specifically configured to:
for each edge cache server, acquiring a cache log from the edge cache server, wherein the cache log stores operation records of each object in the edge cache server;
when it is determined from the cache log that the edge cache server has newly added a cached object, acquiring an identifier of the newly added object from the cache log;
performing hash mapping on the identifier of the newly added object, and determining a hash value of the newly added object;
and adding the hash value of the newly added object to a bloom filter corresponding to the edge cache server.
Optionally, the query module is further configured to:
when it is determined from the cache log that the edge cache server has deleted a cached object, acquiring an identifier of the deleted object from the cache log;
performing hash mapping on the identifier of the deleted object to determine a hash value of the deleted object;
and deleting the hash value of the deleted object from the bloom filter corresponding to the edge cache server.
Optionally, the operation task of the target object is an update instruction of the target object;
the pushing module is specifically configured to:
and pushing the update instruction of the target object to an edge cache server caching the target object so that the edge cache server caching the target object obtains the update content of the target object and replaces the old version content of the target object with the update content of the target object.
Optionally, the operation task of the target object is a deletion instruction of the target object;
the pushing module is specifically configured to:
and pushing the deletion instruction of the target object to an edge cache server caching the target object, so that the edge cache server caching the target object inquires the target object from the cached object and deletes the target object.
In a third aspect, an embodiment of the present application provides an edge cache server, including:
the receiving module is used for receiving an operation task of a target object pushed by the node autonomous server;
and a processing module, configured to execute the operation task of the target object, wherein the edge cache server is determined by the node autonomous server by querying a cache list according to the identifier of the target object carried in the received task push request, and the cache list correspondingly stores the identifiers of the objects cached by each edge cache server.
In a fourth aspect, an embodiment of the present application provides a task pushing device, including at least one processing unit and at least one storage unit, where the storage unit stores a computer program, and when the program is executed by the processing unit, the processing unit is caused to execute the steps of the method in the first aspect.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium, which stores a computer program executable by a task pushing device, and when the program runs on the task pushing device, the program causes the task pushing device to perform the steps of the method of the first aspect.
Since the cache list of the node autonomous server correspondingly stores the identifiers of the objects cached by each edge cache server, when receiving a task pushing request the node autonomous server can first query the cache list according to the identifier of the target object carried in the request, determine the edge cache servers caching the target object, and then push the operation task of the target object only to those edge cache servers, without pushing it to all the edge cache servers. Each edge cache server therefore only needs to process tasks for the web pages it has cached, which improves the efficiency of the edge cache servers in processing tasks and the satisfaction of the clients who push the tasks. Because the node autonomous server does not need to push the operation task of the target object to all the edge cache servers, its own load is reduced and waste of bandwidth resources is avoided.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is an application scenario diagram provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a node autonomous server according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a task pushing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a counting bloom filter according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a counting bloom filter according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a counting bloom filter according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a counting bloom filter according to an embodiment of the present application;
fig. 8 is a schematic flowchart of a method for determining an edge cache server according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a counting bloom filter according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a counting bloom filter according to an embodiment of the present application;
fig. 11 is an application scenario diagram provided in the embodiment of the present application;
fig. 12 is a schematic structural diagram of a node autonomous server according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an edge cache server according to an embodiment of the present disclosure;
fig. 14 is a schematic structural diagram of a task pushing device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In a specific practical process, the inventor of the present invention found that, when a client submits a task to be pushed to edge cache servers to the content management system, the content management system pushes the task indiscriminately to every edge cache server corresponding to the client. However, for tasks that update or delete web pages cached in the edge cache servers, only the edge cache servers that cache those web pages need to execute them; indiscriminate pushing forces edge cache servers that do not cache the corresponding web pages to process such tasks as well. When a large number of clients push tasks at the same time, an edge cache server has to process not only the tasks for web pages it has cached but also tasks for web pages it has not cached, which greatly increases its load, reduces the efficiency with which it processes tasks, and causes a large backlog of tasks waiting to be processed. Moreover, the tasks that clients request to push are generally configured with a timeout, so when a large number of tasks time out, customer complaints increase.
For this reason, the inventor of the present invention considered that, for tasks that update or delete web pages cached in edge cache servers, a differentiated push method may be adopted: the task of updating or deleting a web page is pushed only to the edge cache servers that cache the web page, and not to the edge cache servers that do not. Specifically, the node autonomous server receives a task pushing request, where the task pushing request carries the identifier of the target object and the operation task of the target object, the operation task including at least updating and deleting. The node autonomous server then queries the cache list according to the identifier of the target object and determines the edge cache servers that cache the target object, where the cache list correspondingly stores the identifiers of the target objects cached by each edge cache server, and the node autonomous server collects the cache log of each edge cache server to update the cache list. The operation task of the target object is then pushed to the edge cache servers caching the target object, so that those servers execute it. In this way, an edge cache server only needs to process operation tasks for the web pages it has cached, which effectively improves task-processing efficiency, avoids task timeouts, and reduces customer complaints. In addition, the node autonomous server does not need to push the task to every edge cache server, which saves bandwidth resources.
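For illustration only, the differentiated push decision described above can be sketched roughly as follows (in Go); the type and function names (PushRequest, CacheList, EdgeServer, HandlePush) are hypothetical placeholders invented here, not part of the patented system:

```go
// Sketch of the differentiated push decision; all names are hypothetical.
package push

// PushRequest carries the identifier of the target object and its operation task.
type PushRequest struct {
	ObjectID string // e.g. a URL such as www.abc.com
	Task     string // e.g. "update" or "delete"
}

// EdgeServer is the minimal view of an edge cache server needed here.
type EdgeServer interface {
	Push(task, objectID string) error
}

// CacheList answers which edge cache servers currently cache an object,
// based on the cache logs collected from those servers.
type CacheList interface {
	ServersCaching(objectID string) []EdgeServer
}

// HandlePush forwards the operation task only to the servers that cache the
// target object, instead of broadcasting it to every edge cache server.
func HandlePush(list CacheList, req PushRequest) error {
	for _, srv := range list.ServersCaching(req.ObjectID) {
		if err := srv.Push(req.Task, req.ObjectID); err != nil {
			return err
		}
	}
	return nil
}
```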
The task pushing method in the embodiment of the present application may be applied to an application scenario as shown in fig. 1, where the application scenario includes a content management system 101, a node autonomic server 102, an edge cache server 103, a client terminal 104, and a user terminal 105.
The client terminal 104 is an electronic device with network communication capability, which may be a smartphone, a tablet computer, a portable personal computer, or the like. The client terminal 104 is connected to the content management system 101 through a wireless network. The content management system 101 is connected to one or more node autonomous servers 102 through a wireless network; in each area or machine room one server is selected as the node autonomous server 102, and the node autonomous server 102 manages the edge cache servers 103 in that area or machine room. The node autonomous server 102 is connected to one or more edge cache servers 103. The user terminal 105 is an electronic device with network communication capability, which may be a smartphone, a tablet computer, a portable personal computer, or the like. The user terminal 105 is connected to the edge cache server 103 through a wireless network; the user terminal 105 may request web page content from the edge cache server 103, such as a Taobao web page or a live-streaming web page, and may also request APP content from the edge cache server 103, such as the Taobao APP, a live-streaming APP, or the Tencent Video APP.
Taking a user accessing a web page as an example, after the user enters a web address or clicks a web page link at the user terminal 105, the user terminal 105 sends a web page request to the edge cache server 103, and the edge cache server 103 queries its cached web pages. When the edge cache server 103 has cached the web page requested by the user terminal 105, it sends the cached web page directly to the user terminal 105. When it has not, it acquires the web page from the origin server of the web page and sends it to the user terminal 105; the edge cache server 103 also caches the web pages retrieved from the origin server.
When a client updates or deletes a web page in the source server, a corresponding cached web page in the edge cache server 103 also needs to be updated or deleted, so that the web page acquired by the user from the edge cache server 103 is the same as the web page acquired from the source server. Therefore, in this embodiment of the application, the client terminal 104 sends a task pushing request to the content management system 101, where the task pushing request carries an identifier of the target object and an operation task of the target object, and the operation task at least includes updating and deleting. The content management system 101 sends a task push request to the node autonomic server 102 corresponding to the client. The cache list of the node autonomic server 102 stores hash values of objects cached in all the edge cache servers 103 corresponding to the client. The node autonomous server 102 queries the cache list according to the identifier of the target object, and determines an edge cache server 103 for caching the target object. The task push request is then sent to the edge cache server 103 that caches the target object. The edge cache server 103 that caches the target object performs the operation task of the target object.
Further, in the application scenario diagram shown in fig. 1, a schematic structural diagram of the node autonomous server 102 is shown in fig. 2, where the node autonomous server 102 includes: a communication layer 1021, a resolution layer 1022, a service layer 1023, a data layer 1024, and a storage layer 1025.
The communication layer 1021 mainly includes an acquisition module and a Remote Procedure Call (RPC) module: the acquisition module is used to collect the cache logs of the edge cache servers, and the RPC module is used to interact with the content management system. The parsing layer 1022 includes a cache parsing module and a communication protocol parsing module. The cache parsing module parses the cache log and invalidation record formats of the edge cache servers and adds the parsed results to the counting bloom filters in the data layer 1024. The communication protocol parsing module parses the message protocol used for communication between the content management system and the node autonomous server and can support JSON, protobuf, XML, and the like. The functions of the service layer 1023 include hash mapping, cache queries, cache logs, monitoring statistics, system logs, and so on. The data layer 1024 contains the counting bloom filters corresponding to all the edge cache servers under the node autonomous server 102. The storage layer 1025 persists the counting bloom filter records of the data layer 1024, to prevent pushes from being missed because of data loss when the node autonomous server 102 goes down or is restarted.
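As a rough illustration only of the layered structure just described (not an actual implementation of the node autonomous server 102), the layers might be composed as follows; all type names are invented placeholders:

```go
// Schematic rendering of the five layers of the node autonomous server;
// the layer types are empty placeholders invented for illustration.
package node

type CommunicationLayer struct{} // acquisition module + RPC module
type ParsingLayer struct{}       // cache-log parsing and message-protocol parsing (JSON, protobuf, XML)
type ServiceLayer struct{}       // hash mapping, cache query, cache log, monitoring statistics, system log
type DataLayer struct{}          // one counting bloom filter per edge cache server
type StorageLayer struct{}       // persists the data layer across crashes and restarts

type NodeAutonomousServer struct {
	Communication CommunicationLayer
	Parsing       ParsingLayer
	Service       ServiceLayer
	Data          DataLayer
	Storage       StorageLayer
}
```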
Based on the application scenario diagram shown in fig. 1 and the structure of the node autonomous server shown in fig. 2, an embodiment of the present application provides a flow of a task pushing method, where the flow of the method may be interactively executed by the node autonomous server and the edge cache server, as shown in fig. 3, the method includes the following steps:
step S301, the node autonomous server receives a task pushing request, wherein the task pushing request carries an identifier of a target object and an operation task of the target object.
The target object may be content cached on the edge cache server, such as web page content or APP content. The identifier (address) of the object may be a domain name, a Uniform Resource Locator (URL), or the like. The operation task may be an update instruction, a deletion instruction, or the like. The content management system receives the task pushing request sent by the client and then determines the corresponding node autonomous server according to the identifier of the target object carried in the request. Generally, each client is configured with a node autonomous server in advance; the corresponding client can therefore be determined from the identifier of the target object, the node autonomous server corresponding to that identifier is determined next, and the task push request is then sent to that node autonomous server.
Step S302, the node autonomous service inquires a cache list according to the identification of the target object and determines an edge cache server for caching the target object.
The cache list correspondingly stores the identification of the target object cached by each edge cache server, and is determined according to the cache log in each edge cache server.
Step S303, the node autonomous service pushes the operation task of the target object to an edge cache server for caching the target object.
In step S304, the edge cache server executes the operation task of the target object.
Since the cache list correspondingly stores the identifiers of the objects cached by each edge cache server, when a task pushing request is received, the cache list can be queried according to the identifier of the target object carried in the request, the edge cache servers caching the target object can be determined, and the operation task of the target object then only needs to be pushed to those edge cache servers rather than to all of them. Each edge cache server therefore only needs to process tasks for the web pages it has cached, which improves the efficiency of the edge cache servers in processing tasks and the satisfaction of the clients who push the tasks. The node autonomous server does not need to push the operation task of the target object to all the edge cache servers, so its load is reduced and waste of bandwidth resources is avoided.
In the above step S302, the cache list is determined according to the cache log in each edge cache server.
For each edge cache server, the node autonomous server collects the cache log from the edge cache server, and the cache log stores the operation records of each object in the edge cache server. Specifically, when it is determined from the cache log that the edge cache server has newly added a cached object, the identifier of the newly added object is collected from the cache log and added to the cache list. When it is determined from the cache log that the edge cache server has deleted a cached object, the identifier of the deleted object is collected from the cache log and deleted from the cache list.
The following is an exemplary description of the cache log.
Illustratively, when the edge cache server receives a web page request for www.abc.com sent by a user, it acquires the web page from the source server and caches it, and records in the cache log the web page address www.abc.com and the web page operation "newly added cached web page".
Illustratively, when the web page www.abc.com cached in the edge cache server expires, the edge cache server automatically deletes it and records in the cache log the web page address www.abc.com and the web page operation "deleted cached web page".
Optionally, the node autonomous server may periodically collect the cache log of the edge cache server; for example, at midnight every day the node autonomous server collects the edge cache server's cache log for the past day. The node autonomous server may also monitor the cache log of the edge cache server in real time and collect newly added records as they appear. For example, when the edge cache server newly caches the web page www.abc.com, the node autonomous server collects from the cache log the web page address www.abc.com and the web page operation "newly added cached web page", and then adds the web page address www.abc.com to the cache list. As another example, when the edge cache server deletes the cached web page www.abc.com, the node autonomous server collects the deleted web page address www.abc.com from the cache log. By monitoring the cache logs of the edge cache servers and updating the cache list in the node autonomous server in time, the edge cache servers currently caching the target object can be determined from the cache list when a client's task pushing request is received, so that no server is missed when the task is pushed.
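A minimal sketch of how collected cache-log records might be turned into cache-list updates is given below; the two-field record format and the CacheListStore interface are assumptions made for illustration, not the actual log format of the edge cache server:

```go
// Sketch of turning collected cache-log records into cache-list updates.
package node

import "strings"

// CacheListStore is a hypothetical interface over the cache list.
type CacheListStore interface {
	Add(serverID, objectID string)    // record that a server newly cached an object
	Delete(serverID, objectID string) // record that a server deleted a cached object
}

// ApplyLogRecord parses one cache-log record of the assumed form
// "<web page address> <operation>", e.g. "www.abc.com add" or
// "www.abc.com delete", and updates the cache list accordingly.
func ApplyLogRecord(store CacheListStore, serverID, record string) {
	fields := strings.Fields(record)
	if len(fields) != 2 {
		return // ignore records that do not match the assumed format
	}
	objectID, op := fields[0], fields[1]
	switch op {
	case "add":
		store.Add(serverID, objectID)
	case "delete":
		store.Delete(serverID, objectID)
	}
}
```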
For a cache list in a node autonomous server, the embodiments of the present application provide at least the following implementation manners:
in a possible implementation manner, the cache list in the node autonomous server may be a general table, and the identifier of the object cached by each edge cache server and the identifier of each edge cache server are stored in the cache list in correspondence.
In one possible implementation, the cache list in the node autonomous server includes a plurality of sub-lists, and one sub-list correspondingly stores the identifiers of the objects cached by one edge cache server.
Illustratively, the cache list of the node autonomous server includes sub-list A and sub-list B, corresponding to edge cache server 1 and edge cache server 2 managed by the node autonomous server. The acquisition module periodically collects the cache logs of the two edge cache servers; when the cache log of edge cache server 1 records the newly cached web page www.abc.com, the web page address www.abc.com is added to sub-list A, and when the cache log of edge cache server 2 records that www.12.com has expired, the web page address www.12.com is queried in sub-list B and deleted.
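The sub-list variant described above can be sketched as a per-server set of object identifiers, for example as follows; the names are illustrative only:

```go
// Sketch of the sub-list variant: one set of cached object identifiers
// per edge cache server.
package node

type SubListCache struct {
	lists map[string]map[string]struct{} // server ID -> set of cached object identifiers
}

func NewSubListCache() *SubListCache {
	return &SubListCache{lists: make(map[string]map[string]struct{})}
}

// Add records that the given server newly cached the object (sub-list insert).
func (c *SubListCache) Add(serverID, objectID string) {
	if c.lists[serverID] == nil {
		c.lists[serverID] = make(map[string]struct{})
	}
	c.lists[serverID][objectID] = struct{}{}
}

// Delete records that the given server deleted the object from its cache.
func (c *SubListCache) Delete(serverID, objectID string) {
	delete(c.lists[serverID], objectID)
}

// ServersCaching returns the IDs of the servers whose sub-list contains the object.
func (c *SubListCache) ServersCaching(objectID string) []string {
	var ids []string
	for serverID, objects := range c.lists {
		if _, ok := objects[objectID]; ok {
			ids = append(ids, serverID)
		}
	}
	return ids
}
```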
In a possible implementation manner, the cache list includes a plurality of bloom filters, and one bloom filter corresponds to a hash value of an object cached by one edge cache server.
The bloom filter may be a counting bloom filter, which adds a deletion operation to the basic bloom filter. A bloom filter is a space-efficient randomized data structure that represents a set with a bit array and can determine whether an element belongs to the set; it occupies little memory and supports fast lookups. A counting bloom filter expands each bit of the bloom filter's bit array into a counter: when an element is inserted, each of the k counters it maps to (k being the number of hash functions) is incremented by 1, and when an element is deleted, the same k counters are decremented by 1, which is how the deletion operation is supported.
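A compact counting bloom filter along these lines might look as follows; the double-hashing scheme, the FNV hash, and the 16-bit counter width are implementation choices made here for illustration and are not specified by the text:

```go
// Counting bloom filter sketch: k hash positions over an array of counters,
// with insert, delete and membership test.
package node

import "hash/fnv"

type CountingBloomFilter struct {
	counters []uint16
	k        int // number of hash functions
}

func NewCountingBloomFilter(size, k int) *CountingBloomFilter {
	return &CountingBloomFilter{counters: make([]uint16, size), k: k}
}

// positions derives k counter indexes from two base hashes (double hashing).
func (f *CountingBloomFilter) positions(id string) []int {
	h := fnv.New64a()
	h.Write([]byte(id))
	sum := h.Sum64()
	h1, h2 := uint32(sum), uint32(sum>>32)
	idx := make([]int, f.k)
	for i := 0; i < f.k; i++ {
		idx[i] = int((h1 + uint32(i)*h2) % uint32(len(f.counters)))
	}
	return idx
}

// Add increments the k counters for a newly cached object identifier.
func (f *CountingBloomFilter) Add(id string) {
	for _, p := range f.positions(id) {
		f.counters[p]++
	}
}

// Delete decrements the k counters, supporting removal of cached objects.
func (f *CountingBloomFilter) Delete(id string) {
	for _, p := range f.positions(id) {
		if f.counters[p] > 0 {
			f.counters[p]--
		}
	}
}

// Contains reports whether all k counters are non-zero (possible member).
func (f *CountingBloomFilter) Contains(id string) bool {
	for _, p := range f.positions(id) {
		if f.counters[p] == 0 {
			return false
		}
	}
	return true
}
```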
For each edge cache server, the node autonomous server collects the cache log from the edge cache server, and the cache log stores the operation records of each object in the edge cache server. When the node autonomous server determines from the cache log that the edge cache server has newly added a cached object, it collects the identifier of the newly added object from the cache log, performs hash mapping on the identifier to determine the hash value of the newly added object, and then adds the hash value of the newly added object to the bloom filter corresponding to the edge cache server.
For example, the initial state of the counter array of the counting bloom filter is shown in fig. 4: an array of 10 short-integer counters, each of which is 0. Suppose the number of hash functions is 3 and the newly added web page address is www.abc.com. The 3 hash functions map the newly added web page address into the counter array of the counting bloom filter, corresponding to the 1st, 3rd and 7th counters, and each of these counters is incremented by 1, as shown in fig. 5. If the web page www.123.com is then newly added, the 3 hash functions likewise map its address into the counter array and the corresponding counters are incremented by 1; the counter array of the counting bloom filter is then as shown in fig. 6. Because the hash mappings of the two newly added web pages both hit the 3rd counter, the 3rd counter of the counting bloom filter's counter array is incremented twice and holds the value 2.
For each edge cache server, when the node autonomous server determines from the cache log that the edge cache server has deleted a cached object, it collects the identifier of the deleted object from the cache log, performs hash mapping on the identifier to determine the hash value of the deleted object, and then deletes the hash value of the deleted object from the bloom filter corresponding to the edge cache server.
Illustratively, the current state of the counter array of the counting bloom filter is shown in fig. 6. The number of hash functions is 3 and the deleted web page address is www.abc.com. The 3 hash functions map the deleted web page address into the counter array of the counting bloom filter, corresponding to the 1st, 3rd and 7th counters, and each of these counters is decremented by 1, as shown in fig. 7. Because a counting bloom filter is used to store the hash values of the web page addresses cached by each edge cache server, the memory occupied in the node autonomous server is small.
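Reusing the CountingBloomFilter sketch above, the worked example of figs. 4 to 7 (10 counters, 3 hash functions) would proceed roughly as follows; the concrete counter positions depend on the hash functions chosen and will not match the figures exactly:

```go
// Reuses the CountingBloomFilter sketch above (same package).
func exampleCountingBloomFilter() {
	f := NewCountingBloomFilter(10, 3) // 10 counters, 3 hash functions (fig. 4: all zero)
	f.Add("www.abc.com")               // newly cached page: its 3 counters are incremented (fig. 5)
	f.Add("www.123.com")               // a shared counter position now holds the value 2 (fig. 6)
	f.Delete("www.abc.com")            // expired page: the same 3 counters are decremented (fig. 7)
	_ = f.Contains("www.123.com")      // still reported as (possibly) cached
}
```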
Further, in step S302, the node autonomous server queries the cache list according to the address of the target object, and determines an edge cache server caching the target object, as shown in fig. 8, which specifically includes the following steps:
in step S801, hash mapping is performed on the object of the target object, and a target hash value of the target object is determined.
Step S802, a bloom filter corresponding to each edge cache server is searched according to the target hash value.
In step S803, the edge cache server corresponding to the bloom filter containing the target hash value is determined as the edge cache server caching the target object.
Illustratively, suppose the address of the target object is www.abc.com, and the cache list of the node autonomous server includes counting bloom filter A and counting bloom filter B, corresponding to edge cache server 1 and edge cache server 2 managed by the node autonomous server. The current state of the counter array of counting bloom filter A is shown in fig. 9: the 3 hash functions map the address of the target object to the 1st, 3rd and 7th counters of counting bloom filter A's counter array, and since none of these counters in fig. 9 is 0, it is determined that edge cache server 1 caches the target object. The current state of the counter array of counting bloom filter B is shown in fig. 10: the same 3 hash functions map the address of the target object to the 1st, 3rd and 7th counters of counting bloom filter B's counter array, and since the 7th of these counters in fig. 10 is 0, it is determined that edge cache server 2 does not cache the target object. Therefore, edge cache server 1 is determined to be an edge cache server caching the target object www.abc.com, while edge cache server 2 is not. Looking up the edge cache servers that cache the target object in the counting bloom filters through hash mapping is highly efficient, which improves task-processing efficiency.
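Building on the same CountingBloomFilter sketch, steps S801 to S803 amount to checking the target identifier against each edge cache server's filter, for example as below; the function and map names are illustrative:

```go
// Reuses the CountingBloomFilter sketch above: one filter per edge cache
// server, queried with the target object's identifier (steps S801-S803).
func ServersCachingTarget(filters map[string]*CountingBloomFilter, targetID string) []string {
	var servers []string
	for serverID, f := range filters {
		if f.Contains(targetID) { // all k mapped counters are non-zero
			servers = append(servers, serverID)
		}
	}
	return servers
}
```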
In the above step S303 and step S304, when the operation task of the target object is an update instruction of the target object, the node autonomous server pushes the update instruction of the target object to the edge cache server that caches the target object. And the edge cache server for caching the target object acquires the updated content of the target object and updates the target object.
In a possible implementation manner, when the node autonomous server pushes the update instruction of the target object to the edge cache server caching the target object, it also sends the update content of the target object to that edge cache server, and the edge cache server caching the target object replaces the old version of the target object with the update content. Illustratively, when a client needs to update an image in a web page and sends a task pushing request to the node autonomous server, the new image is carried in the task pushing request; the node autonomous server therefore sends the update instruction for the web page to the edge cache server caching the web page and at the same time sends the new image to it, and the edge cache server replaces the old image in the web page with the new image.
In a possible implementation manner, when the edge cache server caching the target object receives the update instruction of the target object, the edge cache server acquires the update content of the target object from the source server of the target object and then replaces the old version of the target object with the update content. Illustratively, when the edge cache server receives an instruction to update the commodity price in a web page, it obtains the new commodity price from the source server of the web page and then replaces the old commodity price in the web page with the new one.
In the above step S303 and step S304, when the operation task of the target object is a deletion instruction of the target object, the node autonomous server pushes the deletion instruction of the target object to the edge cache server that caches the target object, and the edge cache server that caches the target object queries the cached web page for the target object and deletes the target object. Optionally, after the edge cache server deletes the target object, a new record will be added to the cache log, and the node autonomous server collects the new record and then deletes the identifier of the target object in the cache list according to the record. When a client needs to update or delete a target object cached in the edge cache server, only the updated or deleted operation task is sent to the edge cache server caching the target object instead of being pushed to each edge cache server, so that the pressure of the edge cache servers is reduced, and the task processing efficiency is improved.
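On the edge cache server side, executing a pushed update or delete task might be sketched as follows, assuming a simple in-memory cache and an origin-fetch callback, both of which are placeholders invented for illustration:

```go
// Sketch of the edge cache server executing a pushed task; the in-memory
// cache and the origin-fetch callback are placeholders.
package edge

import "errors"

type EdgeCache struct {
	objects     map[string][]byte                     // cached object content keyed by identifier
	fetchOrigin func(objectID string) ([]byte, error) // pulls fresh content from the origin/source server
}

// Execute applies an "update" or "delete" task; only objects this server
// actually caches are ever pushed to it, so no extra filtering is needed here.
func (c *EdgeCache) Execute(task, objectID string) error {
	switch task {
	case "update":
		content, err := c.fetchOrigin(objectID) // or use content carried in the push request
		if err != nil {
			return err
		}
		c.objects[objectID] = content // replace the old version with the updated content
		return nil
	case "delete":
		delete(c.objects, objectID) // remove the cached copy; the cache log records the deletion
		return nil
	default:
		return errors.New("unknown operation task")
	}
}
```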
In order to better explain the embodiments of the application, the task pushing method provided by the embodiments of the application is described below with reference to a specific implementation scenario: the customer is an e-commerce operator, the target object is the detail page of commodity A, and the e-commerce operator modifies the price of commodity A for a holiday promotion. As shown in fig. 11, the e-commerce operator sends a task push request to the content management system 1101, where the task push request carries the URL of the commodity A detail page and an operation instruction to modify the price of commodity A. The content management system 1101 selects the node autonomous server 1102 corresponding to the e-commerce operator according to the URL of the commodity A detail page and sends the task push request to the node autonomous server 1102. The node autonomous server 1102 includes a counting bloom filter based indexing system 11021. The acquisition system 1104 collects the cache logs of the edge cache servers 1103 in real time, and the acquisition module of the indexing system 11021 obtains the cache logs of the edge cache servers from the acquisition system 1104 and updates the counting bloom filters in the indexing system 11021 according to the cache logs. The indexing system 11021 hashes the URL of the commodity A detail page to determine its hash value, then queries the counting bloom filter corresponding to each edge cache server 1103 and determines the edge cache servers 1103 corresponding to the counting bloom filters containing the hash value of the commodity A detail page as the edge cache servers 1103 caching the commodity A detail page. The operation task of modifying the price of commodity A is sent to the edge cache servers 1103 caching the commodity A detail page, and those edge cache servers 1103 modify the commodity price in the cached commodity A detail page.
Because the operation task of the target object is pushed to the edge cache servers caching the target object rather than to all the edge cache servers, each edge cache server only needs to process tasks for the web pages it has cached, which improves the efficiency of the edge cache servers in processing tasks and the satisfaction of customers. The node autonomous server does not need to push the operation task of the target object to all the edge cache servers, so its load is reduced and waste of bandwidth resources is avoided.
Based on the same technical concept, an embodiment of the present application provides a node autonomic server, as shown in fig. 12, the apparatus 1200 includes: a receiving module 1201, a query module 1202, a pushing module 1203 and an updating module 1204.
A receiving module 1201, configured to receive a task pushing request, where the task pushing request carries an identifier of a target object and an operation task of the target object;
the query module 1202 is configured to query a cache list according to the identifier of the target object, and determine an edge cache server that caches the target object, where the cache list correspondingly stores the identifier of the target object cached by each edge cache server, and the cache list is determined according to a cache log in each edge cache server;
a pushing module 1203, configured to push the operation task of the target object to an edge cache server that caches the target object, so that the edge cache server that caches the target object executes the operation task of the target object.
Optionally, the cache list includes a plurality of bloom filters, and one bloom filter correspondingly stores a hash value of an object cached by one edge cache server;
the query module 1202 is specifically configured to:
performing hash mapping on the identifier of the target object, and determining a target hash value of the target object;
searching a bloom filter corresponding to each edge cache server according to the target hash value;
and determining the edge cache server corresponding to the bloom filter containing the target hash value as the edge cache server for caching the target object.
Optionally, the query module 1202 is specifically configured to:
for each edge cache server, acquiring a cache log from the edge cache server, wherein the cache log stores operation records of each object in the edge cache server;
when it is determined from the cache log that the edge cache server has newly added a cached object, acquiring an identifier of the newly added object from the cache log;
performing hash mapping on the identifier of the newly added object, and determining a hash value of the newly added object;
and adding the hash value of the newly added object to a bloom filter corresponding to the edge cache server.
The query module 1202 is further configured to:
when it is determined from the cache log that the edge cache server has deleted a cached object, acquiring an identifier of the deleted object from the cache log;
performing hash mapping on the identifier of the deleted object to determine a hash value of the deleted object;
and deleting the hash value of the deleted object from the bloom filter corresponding to the edge cache server.
Optionally, the operation task of the target object is an update instruction of the target object;
the pushing module 1203 is specifically configured to:
and pushing the update instruction of the target object to an edge cache server caching the target object so that the edge cache server caching the target object obtains the update content of the target object and replaces the old version content of the target object with the update content of the target object.
Optionally, the operation task of the target object is a deletion instruction of the target object;
the pushing module 1203 is specifically configured to:
and pushing the deletion instruction of the target object to an edge cache server caching the target object, so that the edge cache server caching the target object inquires the target object from the cached object and deletes the target object.
Based on the same technical concept, an embodiment of the present application provides an edge cache server, as shown in fig. 13, the apparatus 1300 includes: a receiving module 1301 and a processing module 1302.
A receiving module 1301, configured to receive an operation task of a target object pushed by a node autonomous server;
a processing module 1302, configured to execute an operation task of the target object, where the edge cache server is determined by querying, by the node autonomous server, a cache list according to an identifier of the target object carried in the received task push request, and the cache list correspondingly stores identifiers of objects cached by each edge cache server.
Optionally, the operation task of the target object is an update instruction of the target object;
the receiving module 1301 is specifically configured to:
receiving an update instruction of the target object pushed by the node autonomous server;
the processing module 1302 is specifically configured to:
and acquiring the updating content of the target object, and updating the target object according to the updating content of the target object.
Optionally, the operation task of the target object is a deletion instruction of the target object;
the receiving module 1301 is specifically configured to:
receiving a deletion instruction of the target object pushed by the node autonomous server;
the processing module 1302 is specifically configured to:
and querying the target object from the cached webpage and deleting the target object.
Based on the same technical concept, the embodiment of the present application provides a task pushing device, as shown in fig. 14, including at least one processor 1401 and a memory 1402 connected to the at least one processor, where a specific connection medium between the processor 1401 and the memory 1402 is not limited in this embodiment, and the processor 1401 and the memory 1402 are connected through a bus in fig. 14 as an example. The bus may be divided into an address bus, a data bus, a control bus, etc.
In the embodiment of the present application, the memory 1402 stores instructions executable by the at least one processor 1401, and the at least one processor 1401 may execute the steps included in the task pushing method by executing the instructions stored in the memory 1402.
The processor 1401 is the control center of the task pushing device; it can connect the various parts of the task pushing device through various interfaces and lines, and pushes tasks by running or executing the instructions stored in the memory 1402 and calling the data stored in the memory 1402. Optionally, the processor 1401 may include one or more processing units, and the processor 1401 may integrate an application processor, which mainly handles the operating system, user interface, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 1401. In some embodiments, the processor 1401 and the memory 1402 may be implemented on the same chip, or in some embodiments they may be implemented separately on their own chips.
The processor 1401 may be a general-purpose processor such as a Central Processing Unit (CPU), a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present Application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
The memory 1402, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 1402 may include at least one type of storage medium, for example a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and so on. The memory 1402 may also be, without limitation, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 1402 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
Based on the same technical concept, embodiments of the present application further provide a computer-readable storage medium storing computer instructions which, when run on a task pushing device, cause the task pushing device to perform the steps of the task pushing method described above.
It should be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications to those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiments and all such variations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (15)

1. A task pushing method, characterized by comprising:
receiving, by a node autonomous server, a task push request sent by a content management system, wherein the task push request carries an identifier of a target object and an operation task of the target object, and the node autonomous server is determined by the content management system according to the identifier of the target object;
querying, by the node autonomous server, a cache list according to the identifier of the target object to determine an edge cache server that caches the target object, wherein the cache list stores the identifiers of the objects cached by each edge cache server, the cache list is determined according to cache logs in the edge cache servers, and each cache log stores operation records of the objects in the corresponding edge cache server; and
pushing, by the node autonomous server, the operation task of the target object to the edge cache server that caches the target object, so that the edge cache server that caches the target object executes the operation task of the target object.

2. The method according to claim 1, wherein the cache list comprises a plurality of Bloom filters, and each Bloom filter stores the hash values of the objects cached by one edge cache server; and
the querying of the cache list according to the identifier of the target object to determine the edge cache server that caches the target object comprises:
performing, by the node autonomous server, hash mapping on the identifier of the target object to determine a target hash value of the target object;
searching, by the node autonomous server, the Bloom filter corresponding to each edge cache server according to the target hash value; and
determining, by the node autonomous server, the edge cache server corresponding to a Bloom filter that contains the target hash value as the edge cache server that caches the target object.

3. The method according to claim 2, wherein the determining of the cache list according to the cache logs in the edge cache servers comprises:
collecting, by the node autonomous server, a cache log from each edge cache server, wherein the cache log stores operation records of the objects in that edge cache server;
when determining from the cache log that the edge cache server has added a cached object, collecting, by the node autonomous server, the identifier of the newly added object from the cache log;
performing, by the node autonomous server, hash mapping on the identifier of the newly added object to determine a hash value of the newly added object; and
adding, by the node autonomous server, the hash value of the newly added object to the Bloom filter corresponding to the edge cache server.

4. The method according to claim 3, further comprising:
when determining from the cache log that the edge cache server has deleted a cached object, collecting, by the node autonomous server, the identifier of the deleted object from the cache log;
performing, by the node autonomous server, hash mapping on the identifier of the deleted object to determine a hash value of the deleted object; and
deleting, by the node autonomous server, the hash value of the deleted object from the Bloom filter corresponding to the edge cache server.

5. The method according to any one of claims 1 to 4, wherein the operation task of the target object is an update instruction for the target object; and
the pushing of the operation task of the target object to the edge cache server that caches the target object comprises:
pushing, by the node autonomous server, the update instruction for the target object to the edge cache server that caches the target object, so that the edge cache server that caches the target object obtains updated content of the target object and replaces the old-version content of the target object with the updated content.

6. The method according to any one of claims 1 to 4, wherein the operation task of the target object is a deletion instruction for the target object; and
the pushing of the operation task of the target object to the edge cache server that caches the target object comprises:
pushing, by the node autonomous server, the deletion instruction for the target object to the edge cache server that caches the target object, so that the edge cache server that caches the target object finds the target object among its cached objects and deletes it.

7. A node autonomous server, characterized by comprising:
a receiving module, configured to receive a task push request sent by a content management system, wherein the task push request carries an identifier of a target object and an operation task of the target object, and the node autonomous server is determined by the content management system according to the identifier of the target object;
a query module, configured to query a cache list according to the identifier of the target object to determine an edge cache server that caches the target object, wherein the cache list stores the identifiers of the objects cached by each edge cache server, the cache list is determined according to cache logs in the edge cache servers, and each cache log stores operation records of the objects in the corresponding edge cache server; and
a push module, configured to push the operation task of the target object to the edge cache server that caches the target object, so that the edge cache server that caches the target object executes the operation task of the target object.

8. The node autonomous server according to claim 7, wherein the cache list comprises a plurality of Bloom filters, and each Bloom filter stores the hash values of the objects cached by one edge cache server; and
the query module is specifically configured to:
perform hash mapping on the identifier of the target object to determine a target hash value of the target object;
search the Bloom filter corresponding to each edge cache server according to the target hash value; and
determine the edge cache server corresponding to a Bloom filter that contains the target hash value as the edge cache server that caches the target object.

9. The node autonomous server according to claim 8, wherein the query module is specifically configured to:
collect a cache log from each edge cache server, wherein the cache log stores operation records of the objects in that edge cache server;
when determining from the cache log that the edge cache server has added a cached object, collect the identifier of the newly added object from the cache log;
perform hash mapping on the identifier of the newly added object to determine a hash value of the newly added object; and
add the hash value of the newly added object to the Bloom filter corresponding to the edge cache server.

10. The node autonomous server according to claim 9, wherein the query module is further configured to:
when determining from the cache log that the edge cache server has deleted a cached object, collect the identifier of the deleted object from the cache log;
perform hash mapping on the identifier of the deleted object to determine a hash value of the deleted object; and
delete the hash value of the deleted object from the Bloom filter corresponding to the edge cache server.

11. The node autonomous server according to any one of claims 7 to 10, wherein the operation task of the target object is an update instruction for the target object; and
the push module is specifically configured to:
push the update instruction for the target object to the edge cache server that caches the target object, so that the edge cache server that caches the target object obtains updated content of the target object and replaces the old-version content of the target object with the updated content.

12. The node autonomous server according to any one of claims 7 to 10, wherein the operation task of the target object is a deletion instruction for the target object; and
the push module is specifically configured to:
push the deletion instruction for the target object to the edge cache server that caches the target object, so that the edge cache server that caches the target object finds the target object among its cached objects and deletes it.

13. An edge cache server, characterized by comprising:
a receiving module, configured to receive an operation task of a target object pushed by a node autonomous server, wherein the operation task is carried in a task push request received by the node autonomous server; and
a processing module, configured to execute the operation task of the target object, wherein the edge cache server is determined by the node autonomous server by querying a cache list according to the identifier of the target object carried in the received task push request, the task push request is sent by a content management system to the node autonomous server, the node autonomous server is determined by the content management system according to the identifier of the target object, the cache list stores the identifiers of the objects cached by each edge cache server, the cache list is determined according to cache logs in the edge cache servers, and each cache log stores operation records of the objects in the corresponding edge cache server.

14. A task pushing device, characterized by comprising at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform the steps of the method according to any one of claims 1 to 6.

15. A computer-readable storage medium, characterized in that it stores a computer program executable by a task pushing device, and when the program runs on the task pushing device, the task pushing device is caused to perform the steps of the method according to any one of claims 1 to 6.
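As an illustration only (not part of the patent disclosure), the following Python sketch models the cache-list mechanism of claims 1 to 4: the node autonomous server keeps one filter per edge cache server, updates it from cache-log entries, and queries it before pushing an operation task. Because claim 4 deletes hash values from the filter and a plain Bloom filter cannot remove entries, the sketch substitutes a counting Bloom filter; all class, function, and parameter names, as well as the choice of MD5/SHA-256 hashing, are assumptions made for the example rather than features defined by the patent.

import hashlib


class CountingBloomFilter:
    """Counting variant of a Bloom filter so entries can also be removed
    (assumed here because claim 4 deletes hash values from the filter)."""

    def __init__(self, size=1 << 20, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.counters = [0] * size

    def _positions(self, hash_value: bytes):
        # Derive several counter positions from one object hash value.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(hash_value + bytes([i])).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, hash_value: bytes):
        for pos in self._positions(hash_value):
            self.counters[pos] += 1

    def remove(self, hash_value: bytes):
        for pos in self._positions(hash_value):
            if self.counters[pos] > 0:
                self.counters[pos] -= 1

    def contains(self, hash_value: bytes) -> bool:
        return all(self.counters[pos] > 0 for pos in self._positions(hash_value))


class NodeAutonomousServer:
    """Keeps one filter per edge cache server (the cache list) and pushes an
    operation task only to the servers whose filter reports the target object."""

    def __init__(self, edge_server_ids):
        self.cache_list = {sid: CountingBloomFilter() for sid in edge_server_ids}

    @staticmethod
    def _hash_identifier(object_id: str) -> bytes:
        # Hash mapping of the object identifier; the concrete hash function
        # is an assumption of this sketch.
        return hashlib.md5(object_id.encode("utf-8")).digest()

    def apply_cache_log_entry(self, edge_server_id: str, action: str, object_id: str):
        # Claims 3 and 4: keep the filter in sync with the edge server's cache
        # log ("add" for newly cached objects, "delete" for removed ones).
        h = self._hash_identifier(object_id)
        if action == "add":
            self.cache_list[edge_server_id].add(h)
        elif action == "delete":
            self.cache_list[edge_server_id].remove(h)

    def push_task(self, object_id: str, operation_task: dict, send):
        # Claims 1 and 2: query the cache list and push the task only to the
        # edge cache servers that (probably) cache the target object.
        target_hash = self._hash_identifier(object_id)
        for server_id, bloom in self.cache_list.items():
            if bloom.contains(target_hash):
                send(server_id, object_id, operation_task)

In such a sketch, apply_cache_log_entry would be called as log lines arrive from each edge cache server, and push_task would be called when the content management system forwards an update or deletion request; a Bloom-filter false positive only causes an occasional unnecessary push, never a missed one.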
CN201811220162.0A 2018-10-19 2018-10-19 Task pushing method, node autonomous server and edge cache server Active CN109167840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811220162.0A CN109167840B (en) 2018-10-19 2018-10-19 Task pushing method, node autonomous server and edge cache server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811220162.0A CN109167840B (en) 2018-10-19 2018-10-19 Task pushing method, node autonomous server and edge cache server

Publications (2)

Publication Number Publication Date
CN109167840A CN109167840A (en) 2019-01-08
CN109167840B true CN109167840B (en) 2021-12-07

Family

ID=64878431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811220162.0A Active CN109167840B (en) 2018-10-19 2018-10-19 Task pushing method, node autonomous server and edge cache server

Country Status (1)

Country Link
CN (1) CN109167840B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110401657B (en) 2019-07-24 2020-09-25 网宿科技股份有限公司 Processing method and device for access log
CN112311820A (en) * 2019-07-26 2021-02-02 腾讯科技(深圳)有限公司 Edge device scheduling method, connection method, device and edge device
JP6901016B1 (en) * 2020-03-09 2021-07-14 日本電気株式会社 Server equipment, edge equipment, processing pattern identification method and control program
CN113064720B (en) * 2021-03-12 2024-04-16 北京达佳互联信息技术有限公司 Object allocation method, device, server and storage medium
CN113065032A (en) * 2021-03-19 2021-07-02 内蒙古工业大学 Self-adaptive data sensing and collaborative caching method for edge network ensemble learning
CN115174696B (en) * 2022-09-08 2023-01-20 北京达佳互联信息技术有限公司 Node scheduling method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102387169A (en) * 2010-08-26 2012-03-21 阿里巴巴集团控股有限公司 Delete method, system and delete server for distributed cache objects
CN102420857A (en) * 2011-11-18 2012-04-18 北京蓝汛通信技术有限责任公司 Operation instruction transmission and processing method, transmission and cache server and storage system
CN103227839A (en) * 2013-05-10 2013-07-31 网宿科技股份有限公司 Management system for regional autonomy of content distribution network server
CN108197138A (en) * 2017-11-21 2018-06-22 北京邮电大学 The method and system for the matching subscription information that releases news in publish/subscribe system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8321568B2 (en) * 2008-03-31 2012-11-27 Amazon Technologies, Inc. Content management
CN101741731A (en) * 2009-12-03 2010-06-16 中兴通讯股份有限公司 Content metadata storing, inquiring method and managing system in content delivery network (CDN)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102387169A (en) * 2010-08-26 2012-03-21 阿里巴巴集团控股有限公司 Delete method, system and delete server for distributed cache objects
CN102420857A (en) * 2011-11-18 2012-04-18 北京蓝汛通信技术有限责任公司 Operation instruction transmission and processing method, transmission and cache server and storage system
CN103227839A (en) * 2013-05-10 2013-07-31 网宿科技股份有限公司 Management system for regional autonomy of content distribution network server
CN108197138A (en) * 2017-11-21 2018-06-22 北京邮电大学 The method and system for the matching subscription information that releases news in publish/subscribe system

Also Published As

Publication number Publication date
CN109167840A (en) 2019-01-08

Similar Documents

Publication Publication Date Title
CN109167840B (en) Task pushing method, node autonomous server and edge cache server
EP3022708B1 (en) Content source discovery
US8606996B2 (en) Cache optimization
US9237114B2 (en) Managing resources in resource cache components
CN108429777B (en) Data updating method based on cache and server
CN111291079A (en) Data query method and device
CN108055302B (en) Picture caching processing method and system and server
WO2017167050A1 (en) Configuration information generation and transmission method, and resource loading method, apparatus and system
CN111221469B (en) Method, device and system for synchronizing cache data
CN109634753B (en) Data processing method, device, terminal and storage medium for switching browser kernels
CN106790601B (en) Service address reading device, system and method
US20200175549A1 (en) Advertisement Display Control Method, Terminal, and Advertisement Server
CN111859132A (en) Data processing method and device, intelligent equipment and storage medium
CN113542420B (en) Processing method and device of hot spot file, electronic equipment and medium
CN111339171A (en) Data query method, device and device
CN110515979B (en) Data query method, device, device and storage medium
CN112035766A (en) Webpage access method and device, storage medium and electronic equipment
CN116842292A (en) Dynamic page caching methods, electronic devices, vehicles and storage media
CN116405460A (en) Domain name resolution method, device, electronic device and storage medium for content distribution network
JP7392168B2 (en) URL refresh method, device, equipment and CDN node in CDN
CN117421499A (en) Front-end processing method, front-end processing device, terminal equipment and storage medium
CN111767481A (en) Access processing method, device, equipment and storage medium
CN114936216A (en) A data update method, device, electronic device and storage medium
CN111367683A (en) A method, device and device for obtaining results
CN113438302A (en) Dynamic resource multi-level caching method, system, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant