
CN111885184A - Method and device for processing hot spot access keywords in high concurrency scene - Google Patents


Info

Publication number
CN111885184A
Authority
CN
China
Prior art keywords
access
cache server
hotspot
keyword
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010743319.9A
Other languages
Chinese (zh)
Inventor
Yang Zhe (杨哲)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OneConnect Smart Technology Co Ltd
OneConnect Financial Technology Co Ltd Shanghai
Original Assignee
OneConnect Financial Technology Co Ltd Shanghai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OneConnect Financial Technology Co Ltd Shanghai filed Critical OneConnect Financial Technology Co Ltd Shanghai
Priority to CN202010743319.9A priority Critical patent/CN111885184A/en
Publication of CN111885184A publication Critical patent/CN111885184A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/56: Provisioning of proxy services
    • H04L67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/56: Provisioning of proxy services
    • H04L67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5682: Policies or rules for updating, deleting or replacing the stored data

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention provides a method for processing hotspot access keywords in a high concurrency scenario, which comprises the following steps: monitoring access traffic data of a plurality of servers in a server cluster of a multi-level cache server architecture; determining, according to the access traffic data, a first cache server and a second cache server in the server cluster; determining at least one first hotspot access keyword among a plurality of access keywords in the first cache server; mapping the first hotspot access keyword to obtain a second hotspot access keyword; and migrating the second hotspot access keyword to the second cache server. The load pressure on a single cache server is effectively reduced, the pressure of hotspot traffic on a single machine is relieved, the impact of hotspots on the cache is addressed, and cache breakdown and cache penetration are avoided.

Description

Method and device for processing hot spot access keywords in high concurrency scene
Technical Field
The invention relates to the technical field of software cloud monitoring, and in particular to a method and a device for processing hotspot access keywords in a high concurrency scenario.
Background
With the development of big data and cloud computing technology, highly concurrent distributed applications have flourished everywhere. When facing the impact of tens of millions of requests reading a database, a multi-level caching scheme is a common industry approach. One of the biggest problems with caching is then the handling of hotspot access keywords. The industry has many means of handling hotspots, for example, adding a dedicated hotspot area to store hotspot access keywords, so that hotspot access keywords and ordinary keywords are stored separately. However, this approach not only adds complexity to the design but also adds resource overhead, since new hotspot instances are required to cache the hotspot data. Therefore, how to handle hotspot access keywords effectively is a problem that must be solved.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for processing a hotspot access keyword in a high concurrency scenario, a computer device, and a computer-readable storage medium, which are used to solve the problem of processing the hotspot access keyword.
The embodiment of the invention solves the technical problems through the following technical scheme:
monitoring access traffic data of a plurality of cache servers in a server cluster of a multi-level cache server architecture;
determining, according to the access traffic data, a first cache server and a second cache server in the server cluster of the multi-level cache server architecture;
determining at least one first hotspot access keyword among a plurality of access keywords in the first cache server;
mapping the first hotspot access keyword to obtain a second hotspot access keyword;
and migrating the second hotspot access keyword to the second cache server.
Further, the determining, according to the access traffic data, a first cache server and a second cache server in the cache server cluster includes:
determining at least one first cache server in the application server cluster according to the access traffic data;
and determining at least one second cache server in the cache server cluster according to the access traffic data.
Further, the determining at least one first cache server in the application server cluster according to the access traffic data includes:
sorting all application servers in the application server cluster in descending order of access traffic data;
taking the top M application servers as first cache servers;
the determining at least one second cache server in the cache server cluster according to the access traffic data comprises:
sorting all cache servers in the cache server cluster in descending order of access traffic data;
and taking the bottom M cache servers as second cache servers.
Further, the determining at least one first cache server in the application server cluster according to the access traffic data includes:
comparing the access traffic data with a first preset threshold;
determining an application server in the application server cluster whose access traffic data exceeds the first preset threshold as a first cache server;
the determining at least one second cache server in the cache server cluster according to the access traffic data comprises:
and determining a cache server in the cache server cluster whose access traffic data does not exceed the first preset threshold as a second cache server.
Further, the determining at least one first hotspot access key of a plurality of access keys in the first cache server comprises:
counting the total number of accesses to the plurality of access keywords in the first cache server and the number of calls to each access keyword;
and comparing the number of calls to each access keyword with the total number of accesses, and determining that an access keyword is a first hotspot access keyword when the ratio of its number of calls to the total number of accesses exceeds a second preset threshold.
Further, the mapping the first hotspot access keyword to obtain a second hotspot access keyword includes:
acquiring a pre-established mapping rule;
and mapping the first hotspot access keyword according to the mapping rule to obtain the second hotspot access keyword.
Further, the method further comprises:
monitoring historical access traffic data of a plurality of cache servers in the server cluster of the multi-level cache server architecture;
acquiring hotspot data corresponding to hotspot access keywords from the historical access traffic data;
and pushing the hotspot data to an application server in the application server cluster.
In order to achieve the above object, an embodiment of the present invention further provides a device for processing a hotspot access keyword in a high concurrency scenario, where the device includes:
an access traffic data monitoring module, configured to monitor access traffic data of a plurality of servers in a server cluster of the multi-level cache server architecture;
a cache server determination module, configured to determine, according to the access traffic data, a first cache server and a second cache server in the server cluster of the multi-level cache server architecture;
a first hotspot access keyword determining module, configured to determine at least one first hotspot access keyword among a plurality of access keywords in the first cache server;
a second hotspot access keyword generation module, configured to map the first hotspot access keyword to obtain a second hotspot access keyword;
and a second hotspot access keyword migration module, configured to migrate the second hotspot access keyword to the second cache server.
In order to achieve the above object, an embodiment of the present invention further provides a computer device, where the computer device includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor, when executing the computer program, implements the steps of the hotspot access keyword processing method in the high concurrency scenario described above.
In order to achieve the above object, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, where the computer program is executable by at least one processor, so as to cause the at least one processor to execute the steps of the hotspot access keyword processing method in the high concurrency scenario as described above.
According to the method, device, computer equipment, and computer-readable storage medium for processing hotspot access keywords in a high concurrency scenario provided by the embodiments of the present invention, some of the hotspot access keywords in a cache server holding more hotspot access keywords are migrated to a cache server holding fewer of them, which effectively reduces the load on a single cache server, relieves the pressure of hotspot traffic on a single machine, addresses the impact of hotspots on the cache, and avoids cache breakdown and cache penetration. The availability of the cache is increased, hotspot data is isolated from ordinary cache data, and the propagation of cache faults is reduced; at the same time, the utilization of the cache servers is improved and the waste of cache server resources is reduced.
The invention is described in detail below with reference to the drawings and specific examples, but the invention is not limited thereto.
Drawings
Fig. 1 is a flowchart illustrating steps of a method for processing a hotspot access keyword in a high concurrency scenario according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a specific process of determining a first cache server and a second cache server in the cache server cluster according to the access traffic data;
fig. 3 is a schematic flowchart of a specific embodiment of determining at least one first cache server in the application server cluster according to the access traffic data;
fig. 4 is a schematic flowchart of a specific embodiment of determining at least one second cache server in the cache server cluster according to the access traffic data;
fig. 5 is a schematic specific flowchart of another embodiment of the step of determining at least one first cache server in the application server cluster according to the access traffic data;
fig. 6 is a schematic detailed flowchart of another embodiment of the step of determining at least one second cache server in the cache server cluster according to the access traffic data;
fig. 7 is a schematic specific flowchart of the step of determining at least one first hotspot access key of the plurality of access keys in the first cache server;
fig. 8 is a schematic specific flowchart of the steps of comparing the total access times of the access keywords with the call times of each access keyword, and determining that the access keyword is a first hotspot access keyword when the ratio of the call times of each access keyword to the total access times exceeds a second preset threshold;
fig. 9 is a schematic specific flowchart of the step of mapping the first hotspot access keyword to obtain a second hotspot access keyword;
fig. 10 is a schematic diagram illustrating a detailed flow of preheating hot spot data according to the present embodiment;
FIG. 11 is a schematic diagram of the program modules of a second embodiment, the hotspot access keyword processing apparatus in a high concurrency scenario according to the invention;
FIG. 12 is a diagram of a hardware configuration of a third embodiment of the computer apparatus according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The technical solutions of the various embodiments may be combined with each other, provided that such a combination can be realized by a person skilled in the art; when technical solutions are contradictory or a combination cannot be realized, the combination should be considered not to exist and falls outside the protection scope of the present invention.
Example one
Referring to fig. 1, a flowchart illustrating steps of a method for processing a hotspot access keyword in a high concurrency scenario according to an embodiment of the present invention is shown. It is to be understood that the flow charts in the embodiments of the present method are not intended to limit the order in which the steps are performed. The following description is given by taking a computer device as an execution subject, specifically as follows:
In practical applications, some services often face highly concurrent data loading requests, for example in business scenarios such as flash sales of commodities and live webcasts. A user initiates an operation through a personal application terminal, the background server receives the related data loading requests from the applications of different users, and the background server determines the keyword associated with each data loading request. In some cases, a plurality of data loading requests correspond to one and the same access keyword, and that shared access keyword becomes a hotspot access keyword.
Step S100: monitoring access traffic data of a plurality of servers in a server cluster of a multi-level cache server architecture.
Specifically, a server cluster refers to a plurality of servers that are brought together to provide the same service; to a client it appears as a single server. A server cluster can use multiple computers for parallel computation to achieve high computing speed, and can also use multiple computers for backup, so that the whole system still runs normally even if any single machine fails. The server cluster comprises a plurality of cache servers, each cache server contains a plurality of access keywords, and the access traffic data is determined from the access traffic corresponding to each access keyword: the access traffic data is the sum of the queries per second (QPS) of all access keywords in the cache server. In an exemplary embodiment, the access traffic data may be reported by the cache server to the hotspot coordinator at a certain period, or the hotspot coordinator may actively request the access traffic data from the cache server.
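For illustration only (this sketch is not part of the patent text, and names such as AccessTrafficMonitor are hypothetical), a cache server could aggregate per-keyword hit counts into this access traffic figure, i.e. the summed QPS of all keywords, and report it to the hotspot coordinator on a fixed period:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch: one instance of this monitor runs inside each cache server.
public class AccessTrafficMonitor {

    // Per-keyword access counters for the current reporting window.
    private final Map<String, LongAdder> hits = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "traffic-reporter");
                t.setDaemon(true);
                return t;
            });

    // Called on every cache lookup of an access keyword.
    public void record(String keyword) {
        hits.computeIfAbsent(keyword, k -> new LongAdder()).increment();
    }

    // Every `periodSeconds`, report the summed QPS of all keywords (the access traffic data).
    public void startReporting(long periodSeconds, String serverId) {
        scheduler.scheduleAtFixedRate(() -> {
            long total = hits.values().stream().mapToLong(LongAdder::sum).sum();
            double qps = (double) total / periodSeconds; // access traffic = sum of per-keyword QPS
            hits.clear();                                // start a new reporting window
            // In a real system this would be an RPC/HTTP call to the hotspot coordinator.
            System.out.printf("server=%s accessTraffic=%.1f qps%n", serverId, qps);
        }, periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        AccessTrafficMonitor monitor = new AccessTrafficMonitor();
        monitor.startReporting(5, "cache-01");
        for (int i = 0; i < 1000; i++) monitor.record(i % 10 == 0 ? "key1" : "key" + i);
        Thread.sleep(6_000); // let one report fire in this demo
    }
}
```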
Step S200: determining, according to the access traffic data, a first cache server and a second cache server in the server cluster of the multi-level cache server architecture.
Specifically, the first cache server refers to a cache server with relatively high access traffic data, and the second cache server refers to a cache server with relatively low access traffic data. In an exemplary embodiment, there may be one or more first cache servers, and there may also be one or more second cache servers.
Specifically, since each cache server contains different access keywords, and contains different numbers of them, the access traffic data of each cache server also differs.
In an exemplary embodiment, the multi-level cache architecture includes an application server cluster, a cache server cluster, and a hotspot coordinating server, and the first cache server and the second cache server in the cache server cluster are determined according to the access traffic data. The application server cluster is the server cluster corresponding to the application software. Referring to fig. 2, step S200 may further include:
step S201: determining at least one first cache server in the application server cluster according to the access flow data;
in an exemplary embodiment, referring to fig. 3, step S201 may further include:
step S2011A: and sequencing all the application servers in the application server cluster according to the sequence of the access flow data from large to small.
Step S2011B: and taking the top M application servers as a first cache server.
Step S202: and determining at least one second cache server in the cache server cluster according to the access flow data.
In an exemplary embodiment, referring to fig. 4, step S202 may further include: step S2021A: sequencing all cache servers in the cache server cluster according to the sequence of the access flow data from large to small;
step S2021B: and taking the M cache servers which are ranked later as a second cache server. The number M may be determined by an experienced developer according to an actual situation, which may be an application service scenario. In one possible implementation, the modification of the number M may be made through the UI. The modification interfaces of the number M are displayed on the front-end interface through the UI interface, and the number M is dynamically modified through a specific operation, wherein the specific operation can be clicking virtual buttons with increased or decreased numbers to increase or decrease the number M.
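A minimal sketch of this ranking-based selection, assuming the access traffic per server has already been collected (for brevity a single list of servers is ranked here, whereas the embodiment ranks the application server cluster for the top M and the cache server cluster for the bottom M; all class, variable, and server names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the ranking-based selection (steps S2011A/B and S2021A/B).
public class ServerRanking {
    public static void main(String[] args) {
        // Illustrative access traffic (e.g. summed QPS) per server.
        Map<String, Double> traffic = Map.of(
                "srv-a", 9200.0, "srv-b", 310.0, "srv-c", 5400.0, "srv-d", 120.0);
        int m = 1; // M is chosen by the developer, or adjusted through the UI described above

        // Sort server ids in descending order of access traffic.
        List<String> ranked = new ArrayList<>(traffic.keySet());
        ranked.sort((a, b) -> Double.compare(traffic.get(b), traffic.get(a)));

        // Top-M servers hold the hotspot keywords to be dispersed ("first cache servers");
        // bottom-M servers will receive the migrated keywords ("second cache servers").
        List<String> first = ranked.subList(0, m);
        List<String> second = ranked.subList(ranked.size() - m, ranked.size());

        System.out.println("first cache servers:  " + first);
        System.out.println("second cache servers: " + second);
    }
}
```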
In another embodiment, referring to fig. 5, step S201 may further include:
Step S2012A: comparing the access traffic data with a first preset threshold;
Step S2012B: determining an application server in the application server cluster whose access traffic data exceeds the first preset threshold as a first cache server.
In an exemplary embodiment, referring to fig. 6, step S202 may further include:
Step S2022A: comparing the access traffic data with the first preset threshold;
Step S2022B: determining a cache server in the cache server cluster whose access traffic data does not exceed the first preset threshold as a second cache server.
Specifically, the first preset threshold may also be determined by an experienced developer according to the actual situation, such as the application business scenario. In a possible embodiment, the first preset threshold may be fixed, or it may be modified as circumstances change: a modification control for the first preset threshold is displayed on the front-end interface through the UI, and the threshold is dynamically adjusted through a specific operation, for example clicking virtual buttons that increase or decrease its value.
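A corresponding sketch of the threshold-based embodiment, again with illustrative names and made-up traffic values; the first preset threshold is simply a configurable number:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch of the threshold-based selection (steps S2012A/B and S2022A/B).
public class ThresholdSelection {
    public static void main(String[] args) {
        // Illustrative access traffic per server; in the patent the first list would come from
        // the application server cluster and the second from the cache server cluster.
        Map<String, Double> traffic = Map.of(
                "app-1", 8700.0, "app-2", 450.0, "cache-1", 9100.0, "cache-2", 200.0);
        double firstPresetThreshold = 1000.0; // adjustable, e.g. through the UI described above

        // Servers whose access traffic exceeds the threshold act as "first cache servers".
        List<String> first = traffic.entrySet().stream()
                .filter(e -> e.getValue() > firstPresetThreshold)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());

        // Servers whose access traffic does not exceed the threshold act as "second cache servers".
        List<String> second = traffic.entrySet().stream()
                .filter(e -> e.getValue() <= firstPresetThreshold)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());

        System.out.println("first:  " + first);
        System.out.println("second: " + second);
    }
}
```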
Step S300: determining at least one first hotspot access keyword among a plurality of access keywords in the first cache server.
Specifically, since the first cache server generally contains a plurality of hotspot access keywords, at least one of them needs to be identified among the access keywords so that it can be migrated.
In an exemplary embodiment, referring to fig. 7, step S300 may further include:
step S301: and counting the total access times of the access keywords in the first cache server and the calling times of each access keyword.
Specifically, when only one first cache server is determined, the total access times of all the access keywords contained in the first cache server and the call times of each access keyword are counted, and when a plurality of first cache servers are determined, the total access times of all the access keywords contained in each first cache server and the call times of each access keyword are counted.
Step S302: comparing the total access times of the access keywords with the call times of each access keyword, and determining that the access keyword is a first hotspot access keyword when the ratio of the call times of each access keyword to the total access times exceeds a second preset threshold value.
Specifically, the second preset threshold may also be determined by an experienced developer according to an actual situation, and the actual situation may specifically be an application service scene or the like. In a feasible implementation manner, the second preset threshold may be unchanged, or may be modified according to an actual situation, where a modification manner of the second preset threshold may be the same as a modification manner of the first preset threshold, and details are not described in this scheme.
In an exemplary embodiment, referring to fig. 8, step S302 may further include:
step S302A: sorting the ratio of the calling times of each access keyword to the total access times according to the size exceeding a second preset threshold value;
step S302B: and determining the top N access keywords as the first hotspot access keywords.
The number N may be determined by an experienced developer according to an actual situation, which may be an application service scenario. In one possible implementation, the number N of modifications may be made through the UI. The number N of modification interfaces is displayed on the front-end interface through the UI interface, and the number N is dynamically modified through a specific operation, wherein the specific operation can be clicking virtual buttons with increased or decreased numbers to increase or decrease the number N.
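The statistics of steps S301 to S302B can be sketched as follows (the class name, data structure, and the values of N and the threshold are illustrative; the patent does not prescribe a concrete implementation):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch of steps S301/S302/S302A/S302B: a keyword whose share of the total
// access count exceeds the second preset threshold is a first hotspot access keyword,
// and the top-N such keywords are selected.
public class HotspotKeywordDetector {

    static List<String> detect(Map<String, Long> callCounts, double secondPresetThreshold, int n) {
        // Total number of accesses = sum of the call counts of all access keywords.
        long totalAccesses = callCounts.values().stream().mapToLong(Long::longValue).sum();
        return callCounts.entrySet().stream()
                // keep keywords whose call count / total accesses exceeds the threshold
                .filter(e -> (double) e.getValue() / totalAccesses > secondPresetThreshold)
                // sort the qualifying keywords by call count (equivalently, by ratio), largest first
                .sorted(Comparator.comparingDouble((Map.Entry<String, Long> e) -> e.getValue()).reversed())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Long> counts = Map.of("key1", 7200L, "key2", 300L, "key3", 2100L, "key4", 400L);
        // N = 2 and threshold = 0.2 are illustrative; the patent leaves both to the developer/UI.
        System.out.println(detect(counts, 0.2, 2)); // prints [key1, key3]
    }
}
```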
Step S400: mapping the first hotspot access keyword to obtain a second hotspot access keyword;
In an exemplary embodiment, referring to fig. 9, step S400 may further include:
Step S401: acquiring a pre-established mapping rule;
Step S402: and mapping the first hotspot access keyword according to the mapping rule to obtain the second hotspot access keyword.
Specifically, mapping the first hotspot access keyword means converting the first hotspot access keyword into a second hotspot access keyword by adding a prefix or a suffix according to the mapping rule, so that the hotspot access keyword is distinguishable from other access keywords and it is clear that this access keyword needs to be hashed. For example, for a first hotspot access keyword key1, once an access keyword is detected to be this first hotspot access keyword, it is converted according to the mapping rule into a number + "_" + key1, which serves as the second hotspot access keyword. Whether a secondary hash is required is judged according to whether the identified access keyword is a first hotspot access keyword.
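One possible reading of this mapping rule, with the prefix number drawn at random from a small range so that repeated lookups of the same hot keyword spread over several mapped keys (the fanout parameter and the helper names are assumptions, not part of the patent):

```java
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch of steps S401/S402: a detected first hotspot access keyword is rewritten
// with a numeric prefix so that it can be told apart from ordinary keywords and hashed again.
public class HotspotKeyMapper {

    // Assumed mapping rule: prepend a number in [0, fanout) plus "_", e.g. "key1" -> "3_key1".
    // The original keyword is recoverable by stripping the prefix.
    static String map(String firstHotspotKeyword, int fanout) {
        int n = ThreadLocalRandom.current().nextInt(fanout);
        return n + "_" + firstHotspotKeyword;
    }

    static String unmap(String secondHotspotKeyword) {
        return secondHotspotKeyword.substring(secondHotspotKeyword.indexOf('_') + 1);
    }

    public static void main(String[] args) {
        String second = map("key1", 5);
        System.out.println(second + " -> " + unmap(second)); // e.g. "3_key1 -> key1"
    }
}
```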
Step S500: migrating the second hotspot access keyword to the second cache server.
Specifically, the second hotspot access keyword is migrated to the second cache server according to a hotspot access keyword processing policy, which disperses the second hotspot access keywords across the second cache servers according to a hash rule. The policy differs depending on how the first and second cache servers were distinguished. For example, the second hotspot access keywords may be migrated evenly according to their number: when there are 100 second hotspot access keywords and 5 second cache servers, every 20 hotspot access keywords are migrated to one second cache server. When there is only one first cache server and only one second cache server, all second hotspot access keywords are migrated to that second cache server. In addition, the hotspot access keywords of a first cache server holding more hotspot access keywords can be migrated to the second cache server with less access traffic data, so as to maximize resource utilization. In another embodiment, the hotspot access keyword processing policy further includes hashing by key value, round-robin by number, designating a cache instance as the hash back end, and the like.
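Two of the dispersal policies named above, even distribution and hashing by key value, can be sketched as follows (server and keyword names are illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of step S500: second hotspot access keywords are dispersed across the
// second cache servers. Two of the policies named above are shown: even distribution and
// hashing by key value.
public class HotspotKeyMigrator {

    // Even distribution: e.g. 100 keywords over 5 servers gives 20 keywords per server.
    static Map<String, String> distributeEvenly(List<String> keywords, List<String> servers) {
        Map<String, String> assignment = new HashMap<>();
        for (int i = 0; i < keywords.size(); i++) {
            assignment.put(keywords.get(i), servers.get(i % servers.size()));
        }
        return assignment;
    }

    // Hash by key value: the same keyword always lands on the same second cache server.
    static String hashToServer(String keyword, List<String> servers) {
        return servers.get(Math.floorMod(keyword.hashCode(), servers.size()));
    }

    public static void main(String[] args) {
        List<String> servers = List.of("cache-a", "cache-b", "cache-c");
        List<String> keywords = List.of("0_key1", "1_key1", "2_key1", "3_key1");
        System.out.println(distributeEvenly(keywords, servers));
        System.out.println("4_key1 -> " + hashToServer("4_key1", servers));
    }
}
```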
In an exemplary embodiment, the hotspot access keyword processing policy is pre-configured using the Apollo open source project, an open-source unified application configuration center that supports management and configuration in four dimensions: application, environment, cluster, and namespace. Apollo can centrally manage the configuration of an application across different environments and clusters, and a modified configuration can be pushed to the application side in real time. In one embodiment, the hotspot access keyword processing policy may be dynamically configured and pushed through the UI: all hotspot access keyword processing policies are displayed on the front-end interface, and a policy is dynamically configured through a specific operation, for example closing or opening the virtual button corresponding to that policy; after the virtual button of a certain policy is opened, the first hotspot access keyword is migrated to the second cache server according to that policy. In another embodiment, a number corresponding to a hotspot access keyword processing policy may also be entered on the front-end interface to enable the policy corresponding to that number.
In another embodiment, referring to fig. 10, the present solution further includes:
Step S110: monitoring historical access traffic data of a plurality of servers in the server cluster of the multi-level cache server architecture;
Step S111: acquiring hotspot data corresponding to hotspot access keywords from the historical access traffic data;
Step S112: and pushing the hotspot data to an application server in the application server cluster.
Specifically, before the at least one first hotspot access keyword in the at least one first cache server is determined, a preheating policy configured in the configuration center is used to send requests to the cache server cluster through an independent thread, loading the hotspot data corresponding to the hotspot access keywords into the cache servers in advance. The preheating policy includes: marking hotspot access keywords in advance; or pushing the historical access traffic data of hotspots in advance, so that no hotspot access keyword is missed.
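A minimal sketch of such a preheating step, assuming the hotspot keywords have already been obtained from historical access traffic (the backend lookup and class names are placeholders):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the preheating step: an independent thread loads the data behind
// known or historical hotspot access keywords into the cache before the traffic arrives,
// so the first wave of requests does not fall through to the database.
public class CachePreheater {

    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Stand-in for a real database or backend lookup.
    private String loadFromBackend(String keyword) {
        return "value-of-" + keyword;
    }

    public void preheat(List<String> hotspotKeywords) {
        Thread warmer = new Thread(() -> {
            for (String keyword : hotspotKeywords) {
                cache.putIfAbsent(keyword, loadFromBackend(keyword));
            }
            System.out.println("preheated " + hotspotKeywords.size() + " hotspot keywords");
        }, "cache-preheater");
        warmer.setDaemon(true);
        warmer.start();
    }

    public static void main(String[] args) throws InterruptedException {
        CachePreheater preheater = new CachePreheater();
        preheater.preheat(List.of("key1", "key3")); // keywords taken from historical traffic
        Thread.sleep(200); // give the warm-up thread time to finish in this demo
    }
}
```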
In this embodiment, by migrating some of the hotspot access keywords in the cache server holding more hotspot access keywords to the cache server holding fewer of them, the load on a single cache server is effectively reduced, the pressure of hotspot traffic on a single machine is relieved, the impact of hotspots on the cache is addressed, and cache breakdown and cache penetration are avoided. The availability of the cache system is increased, hotspot data is isolated from ordinary cache data, and the propagation of cache faults is reduced; at the same time, the utilization of the cache servers is improved and the waste of server resources is reduced.
Example two
Referring to fig. 11, a schematic block diagram of the hotspot access keyword processing apparatus 20 in a high concurrency scenario according to the present invention is shown. In this embodiment, the hotspot access keyword processing apparatus 20 may include or be divided into one or more program modules, which are stored in a storage medium and executed by one or more processors, so as to complete the present invention and implement the above-described method for processing hotspot access keywords in a high concurrency scenario. The program module referred to in the embodiments of the present invention is a series of computer program instruction segments capable of performing specific functions, and is better suited than the program itself for describing the execution process of the hotspot access keyword processing apparatus 20 in a high concurrency scenario. The following description specifically introduces the functions of the program modules of this embodiment:
The access traffic data monitoring module 200 is configured to monitor access traffic data of a plurality of servers in a server cluster of the multi-level cache server architecture.
The cache server determination module 202 is configured to determine, according to the access traffic data, a first cache server and a second cache server in the server cluster of the multi-level cache server architecture.
further, the cache server determination module 202 is configured to:
determine at least one first cache server in the application server cluster according to the access traffic data;
and determine at least one second cache server in the cache server cluster according to the access traffic data.
Further, the cache server determination module 202 is further configured to:
sort all application servers in the application server cluster in descending order of access traffic data;
and take the top M application servers as first cache servers;
sort all cache servers in the cache server cluster in descending order of access traffic data;
and take the bottom M cache servers as second cache servers.
Further, the cache server determination module 202 is further configured to:
compare the access traffic data with a first preset threshold;
and determine an application server in the application server cluster whose access traffic data exceeds the first preset threshold as a first cache server.
Further, the cache server determination module 202 is further configured to:
compare the access traffic data with the first preset threshold;
and determine a cache server in the cache server cluster whose access traffic data does not exceed the first preset threshold as a second cache server.
The first hotspot access keyword determining module 204 is configured to determine at least one first hotspot access keyword among a plurality of access keywords in the first cache server.
Further, the first hotspot access keyword determining module 204 is further configured to:
count the total number of accesses to the plurality of access keywords in the first cache server and the number of calls to each access keyword;
and compare the number of calls to each access keyword with the total number of accesses, and determine that an access keyword is a first hotspot access keyword when the ratio of its number of calls to the total number of accesses exceeds a second preset threshold.
A second hotspot access keyword generation module 206, configured to map the first hotspot access keyword to obtain a second hotspot access keyword;
further, the second hotspot access keyword generation module 206 is further configured to:
acquire a pre-established mapping rule;
and map the first hotspot access keyword according to the mapping rule to obtain the second hotspot access keyword.
The second hotspot access keyword migration module 208 is configured to migrate the second hotspot access keyword to the second cache server according to a hotspot access keyword processing policy.
Further, the second hotspot access keyword migration module 208 is further configured to:
monitor historical access traffic data of a plurality of cache servers in the server cluster of the multi-level cache server architecture;
acquire hotspot data corresponding to hotspot access keywords from the historical access traffic data;
and push the hotspot data to an application server in the application server cluster.
EXAMPLE III
Fig. 12 is a schematic diagram of a hardware architecture of a computer device according to a third embodiment of the present invention. In this embodiment, the computer device 2 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions. The computer device 2 may be a rack cache server, a blade cache server, a tower cache server, or a cabinet cache server (including an independent cache server or a cache server cluster composed of a plurality of cache servers), and the like. As shown in fig. 12, the computer device 2 at least includes, but is not limited to, a memory 21, a processor 22, a network interface 23, and the hotspot access keyword processing apparatus 20 in a high concurrency scenario, which are communicatively connected to each other through a system bus. Wherein:
in this embodiment, the memory 21 includes at least one type of computer-readable storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the storage 21 may be an internal storage unit of the computer device 2, such as a hard disk or a memory of the computer device 2. In other embodiments, the memory 21 may also be an external storage device of the computer device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like provided on the computer device 2. Of course, the memory 21 may also comprise both internal and external memory units of the computer device 2. In this embodiment, the memory 21 is generally used to store operations and various application software installed in the computer device 2, such as the program code of the hot-spot access keyword processing apparatus 20 in the high concurrency scenario described in the above embodiment. Further, the memory 21 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 22 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 22 is typically used to control the overall operation of the computer device 2. In this embodiment, the processor 22 is configured to run the program code stored in the memory 21 or process data, for example, run the hot spot access keyword processing apparatus 20 in a high concurrency scenario, so as to implement the hot spot access keyword processing method in the high concurrency scenario of the foregoing embodiment.
The network interface 23 may comprise a wireless network interface or a wired network interface, and the network interface 23 is generally used for establishing communication connection between the computer device 2 and other electronic apparatuses. For example, the network interface 23 is used to connect the computer device 2 to an external terminal through a network, establish a data transmission channel and a communication connection between the computer device 2 and the external terminal, and the like. The network may be a wireless or wired network such as an Intranet (Intranet), the Internet (Internet), Global System of Mobile communication (GSM), Wideband Code Division Multiple Access (WCDMA), 4G network, 5G network, Bluetooth (Bluetooth), Wi-Fi, and the like.
It is noted that fig. 12 only shows the computer device 2 with components 20-23, but it is to be understood that not all of the shown components are required to be implemented, and that more or less components may be implemented instead.
In this embodiment, the hot spot access keyword processing apparatus 20 in the high concurrency scenario stored in the memory 21 may be further divided into one or more program modules, and the one or more program modules are stored in the memory 21 and executed by one or more processors (in this embodiment, the processor 22) to complete the present invention.
For example, fig. 11 shows a schematic diagram of the program modules of the second embodiment of the hotspot access keyword processing apparatus 20 in a high concurrency scenario. In this embodiment, the apparatus 20 may be divided into an access traffic data monitoring module 200, a cache server determination module 202, a first hotspot access keyword determining module 204, a second hotspot access keyword generation module 206, and a second hotspot access keyword migration module 208. The program module referred to in the present invention is a series of computer program instruction segments capable of performing specific functions, and is better suited than the program itself for describing the execution process of the hotspot access keyword processing apparatus 20 in the computer device 2. The specific functions of the modules from the access traffic data monitoring module 200 to the second hotspot access keyword migration module 208 have been described in detail in the foregoing embodiments and are not repeated here.
Example four
The present embodiment also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a cache server, or an App store, on which a computer program is stored that, when executed by a processor, implements the corresponding functions. The computer-readable storage medium of this embodiment is used to store the hotspot access keyword processing apparatus 20 in a high concurrency scenario; when executed by the processor, the method for processing hotspot access keywords in a high concurrency scenario described in the foregoing embodiments is implemented.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method for processing hotspot access keywords in a high concurrency scenario, characterized by comprising the following steps: monitoring access traffic data of a plurality of servers in a server cluster of a multi-level cache server architecture;
determining, according to the access traffic data, a first cache server and a second cache server in the server cluster of the multi-level cache server architecture;
determining at least one first hotspot access keyword among a plurality of access keywords in the first cache server;
mapping the first hotspot access keyword to obtain a second hotspot access keyword;
and migrating the second hotspot access keyword to the second cache server.
2. The method according to claim 1, wherein the multi-level cache architecture includes an application server cluster, a cache server cluster, and a hotspot coordinating server, and the determining, according to the access traffic data, a first cache server and a second cache server in the cache server cluster includes:
determining at least one first cache server in the application server cluster according to the access traffic data;
and determining at least one second cache server in the cache server cluster according to the access traffic data.
3. The method according to claim 2, wherein the determining at least one first cache server in the application server cluster according to the access traffic data comprises:
sorting all application servers in the application server cluster in descending order of access traffic data;
taking the top M application servers as first cache servers;
the determining at least one second cache server in the cache server cluster according to the access traffic data comprises:
sorting all cache servers in the cache server cluster in descending order of access traffic data;
and taking the bottom M cache servers as second cache servers.
4. The method according to claim 2, wherein the determining at least one first cache server in the application server cluster according to the access traffic data comprises:
comparing the access traffic data with a first preset threshold;
determining an application server in the application server cluster whose access traffic data exceeds the first preset threshold as a first cache server;
the determining at least one second cache server in the cache server cluster according to the access traffic data comprises:
comparing the access traffic data with the first preset threshold;
and determining a cache server in the cache server cluster whose access traffic data does not exceed the first preset threshold as a second cache server.
5. The method for processing hotspot access keywords in a high concurrency scenario according to claim 3 or 4, wherein the determining at least one first hotspot access keyword among the plurality of access keywords in the first cache server comprises:
counting the total number of accesses to the plurality of access keywords in the first cache server and the number of calls to each access keyword;
and comparing the number of calls to each access keyword with the total number of accesses, and determining that an access keyword is a first hotspot access keyword when the ratio of its number of calls to the total number of accesses exceeds a second preset threshold.
6. The method according to claim 5, wherein the mapping the first hotspot access keyword to obtain a second hotspot access keyword comprises:
acquiring a pre-established mapping rule;
and mapping the first hotspot access keyword according to the mapping rule to obtain the second hotspot access keyword.
7. The method for processing hotspot access keywords in a high concurrency scenario according to claim 6, further comprising:
monitoring historical access traffic data of a plurality of cache servers in the server cluster of the multi-level cache server architecture;
acquiring hotspot data corresponding to hotspot access keywords from the historical access traffic data;
and pushing the hotspot data to an application server in the application server cluster.
8. A device for processing hotspot access keywords in a high concurrency scenario, characterized by comprising:
an access traffic data monitoring module, configured to monitor access traffic data of a plurality of cache servers in a server cluster of a multi-level cache server architecture;
a cache server determination module, configured to determine, according to the access traffic data, a first cache server and a second cache server in the server cluster of the multi-level cache server architecture;
a first hotspot access keyword determining module, configured to determine at least one first hotspot access keyword among a plurality of access keywords in the first cache server;
a second hotspot access keyword generation module, configured to map the first hotspot access keyword to obtain a second hotspot access keyword;
and a second hotspot access keyword migration module, configured to migrate the second hotspot access keyword to the second cache server.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the steps of the method for processing the hotspot access keyword in the high concurrency scenario according to any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program, the computer program being executable by at least one processor to cause the at least one processor to perform the steps of the method for processing hotspot access keywords in high concurrency scenarios according to any one of claims 1 to 7.
CN202010743319.9A 2020-07-29 2020-07-29 Method and device for processing hot spot access keywords in high concurrency scene Pending CN111885184A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010743319.9A CN111885184A (en) 2020-07-29 2020-07-29 Method and device for processing hot spot access keywords in high concurrency scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010743319.9A CN111885184A (en) 2020-07-29 2020-07-29 Method and device for processing hot spot access keywords in high concurrency scene

Publications (1)

Publication Number Publication Date
CN111885184A true CN111885184A (en) 2020-11-03

Family

ID=73201957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010743319.9A Pending CN111885184A (en) 2020-07-29 2020-07-29 Method and device for processing hot spot access keywords in high concurrency scene

Country Status (1)

Country Link
CN (1) CN111885184A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109729108A (en) * 2017-10-27 2019-05-07 阿里巴巴集团控股有限公司 A kind of method, associated server and system for preventing caching from puncturing

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112367540A (en) * 2020-11-13 2021-02-12 广州易方信息科技股份有限公司 Method and device for monitoring live broadcast push stream number on line and computer equipment
CN113489776A (en) * 2021-06-30 2021-10-08 北京小米移动软件有限公司 Hotspot detection method and device, monitoring server and storage medium
CN113918603A (en) * 2021-10-11 2022-01-11 平安国际智慧城市科技股份有限公司 Hash cache generation method and device, electronic equipment and storage medium
CN114971079A (en) * 2022-06-29 2022-08-30 中国工商银行股份有限公司 Second killing type transaction processing optimization method and device
CN114971079B (en) * 2022-06-29 2024-05-28 中国工商银行股份有限公司 Second killing type transaction processing optimization method and device
CN118426963A (en) * 2024-05-14 2024-08-02 北京墨星球科技有限公司 Distributed cache hot spot data processing method and device
CN118426963B (en) * 2024-05-14 2024-11-22 北京墨星球科技有限公司 A distributed cache hotspot data processing method and device

Similar Documents

Publication Publication Date Title
CN111885184A (en) Method and device for processing hot spot access keywords in high concurrency scene
CN110505162B (en) Message transmission method and device and electronic equipment
CN110704177B (en) Computing task processing method and device, computer equipment and storage medium
CN111460474B (en) Method, device, memory and computer for implementing decentralization predictor
CN110737668A (en) Data storage method, data reading method, related device and medium
CN116150116B (en) File system sharing method and device, electronic equipment and storage medium
CN106055630A (en) Log storage method and device
CN111726266B (en) Hotspot data bucketing method, system and computer equipment
CN112667405B (en) Information processing method, device, equipment and storage medium
CN112866348B (en) Database access method and device, computer equipment and storage medium
CN111988429A (en) Algorithm scheduling method and system
CN115859261A (en) Password cloud service method, platform, equipment and storage medium
CN114827157A (en) Cluster task processing method, device and system, electronic equipment and readable medium
CN112269661B (en) Partition migration method and device based on Kafka cluster
CN111586177B (en) Cluster session loss prevention method and system
CN116151631A (en) Service decision processing system, service decision processing method and device
CN112422450A (en) Computer equipment, and flow control method and device for service request
CN111694639A (en) Method and device for updating address of process container and electronic equipment
CN109614242B (en) A computing power sharing method, device, equipment and medium
CN111245928A (en) Resource adjusting method based on super-fusion architecture, Internet of things server and medium
CN110928888A (en) Data feedback method, device, server and storage medium
CN114661762A (en) Query method and device for embedded database, storage medium and equipment
CN114070847A (en) Current limiting method, device, equipment and storage medium of server
CN117130979A (en) Service resource migration method and device and electronic equipment
CN111404979B (en) Method and device for processing service request and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201103