Disclosure of Invention
In view of this, one or more embodiments of the present disclosure provide a data caching method and apparatus, an electronic device, and a storage medium.
In order to achieve the above object, one or more embodiments of the present disclosure provide the following technical solutions:
according to a first aspect of one or more embodiments of the present specification, there is provided a data caching method applied to a database node, the method comprising:
If the space occupied by the persistently cached data on the local storage device is greater than a persistent cache threshold, deleting part of the persistently cached data on the local storage device so that the space occupied by the persistently cached data on the local storage device is not greater than the persistent cache threshold, and transferring the metadata of the deleted data from a cache area in memory to a backup cache area;
If the space occupied by the persistently cached data on the local storage device is less than the persistent cache threshold, caching, on the local storage device, the data to which some or all of the metadata in the backup cache area in memory belongs, and transferring the metadata of the cached data from the backup cache area in memory to the cache area;
wherein the persistent cache threshold is a space threshold allocated by the database node on the local storage device for persistent caching.
In a possible embodiment of the present specification, the method further comprises:
And if the space required by the data currently to be cached is greater than the remaining space available for the persistent cache on the local storage device, deleting part of the persistently cached data on the local storage device so that the data currently to be cached can be cached to the local storage device, and transferring the metadata of the deleted data from the cache area in memory to the backup cache area.
In one possible embodiment of the present disclosure, the caching, on the local storage device, of the data to which some or all of the metadata in the backup cache area in memory belongs if the space occupied by the persistently cached data on the local storage device is less than the persistent cache threshold, and the transferring of the metadata of the cached data from the backup cache area in memory to the cache area, include:
If the space occupied by the persistently cached data on the local storage device is less than the persistent cache threshold, caching, from the shared data layer to the local storage device, the data to which some or all of the metadata in the backup cache area in memory belongs, and transferring the metadata of the cached data from the backup cache area in memory to the cache area.
In one possible embodiment of the present disclosure, the caching, on the local storage device, of the data to which some or all of the metadata in the backup cache area in memory belongs if the space occupied by the persistently cached data on the local storage device is less than the persistent cache threshold includes:
If the space occupied by the persistently cached data on the local storage device is less than the persistent cache threshold, metadata exists in the backup cache area in memory, and the space required by the data to which all of the metadata in the backup cache area belongs is not greater than the remaining space available for the persistent cache on the local storage device, caching the data to which all of the metadata in the backup cache area belongs to the local storage device;
and if the space occupied by the persistently cached data on the local storage device is less than the persistent cache threshold, metadata exists in the backup cache area in memory, and the space required by the data to which the metadata in the backup cache area belongs is greater than the remaining space available for the persistent cache on the local storage device, caching the data to which part of the metadata in the backup cache area belongs to the local storage device.
In one possible embodiment of the present disclosure, the backup cache area includes a recent access list and a frequent access list, and the caching of the data to which part of the metadata in the backup cache area belongs to the local storage device includes:
caching the data to which part of the metadata in the backup cache area belongs to the local storage device on the principle of preferentially transferring the data to which the metadata in the frequent access list belongs.
In one possible embodiment of the present disclosure, the caching of the data to which part of the metadata in the backup cache area belongs to the local storage device includes:
caching the data to which part of the metadata in the backup cache area belongs to the local storage device on the principle of preferentially transferring data with a high access heat value.
In one possible embodiment of the present disclosure, the transferring of the metadata of the deleted data from the cache area in memory to the backup cache area includes:
if the remaining space in the backup cache area in memory is less than the space required by the metadata of the deleted data, adjusting the space allocated to the backup cache area in memory to be not less than the space required by the metadata of the deleted data, and transferring the metadata of the deleted data from the cache area to the backup cache area.
In one possible embodiment of the present disclosure, the caching, on the local storage device, of the data to which some or all of the metadata in the backup cache area in memory belongs if the space occupied by the persistently cached data on the local storage device is less than the persistent cache threshold, and the transferring of the metadata of the cached data from the backup cache area in memory to the cache area, include:
If an expansion node is added to the database node so that the space occupied by the persistently cached data on the local storage device of the database node is less than the persistent cache threshold, sending the metadata in the backup cache area in the memory of the database node to the expansion node, so that the expansion node caches, on its local storage device, the data to which some or all of the received metadata belongs, adds the metadata of the cached data to the cache area in its memory, and adds the remaining metadata to the backup cache area in its memory.
According to a second aspect of one or more embodiments of the present specification, there is provided a data caching apparatus applied to a database node, the apparatus comprising:
a capacity reduction module, configured to delete part of the persistently cached data on the local storage device if the space occupied by the persistently cached data on the local storage device is greater than a persistent cache threshold, so that the space occupied by the persistently cached data on the local storage device is not greater than the persistent cache threshold, and to transfer the metadata of the deleted data from a cache area in memory to a backup cache area;
a capacity expansion module, configured to cache, on the local storage device, the data to which some or all of the metadata in the backup cache area in memory belongs if the space occupied by the persistently cached data on the local storage device is less than the persistent cache threshold, and to transfer the metadata of the cached data from the backup cache area in memory to the cache area;
wherein the persistent cache threshold is a space threshold allocated by the database node on the local storage device for persistent caching.
According to a third aspect of one or more embodiments of the present description, a computer program product is presented, comprising a computer program/instruction which, when executed by a processor, implements the steps of the method of the first aspect.
According to a fourth aspect of one or more embodiments of the present specification, there is provided an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
Wherein the processor implements the method of the first aspect by executing the executable instructions.
According to a fifth aspect of one or more embodiments of the present description, there is provided a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to the first aspect.
The technical solutions provided by the embodiments of the present specification may achieve the following beneficial effects:
In the data caching method provided by the embodiments of the present specification, the database node is configured with a cache area and a backup cache area in memory. When persistently caching data, the database node stores the data on a local storage device such as a disk and, at the same time, loads the metadata of the cached data into the cache area. When the space for the persistent cache in the local storage device shrinks, part of the cached data can be deleted, and the metadata of the deleted data is transferred from the cache area to the backup cache area. When the space for the persistent cache in the local storage device grows, the database node can persistently cache the data to which the metadata in the backup cache area belongs to the local storage device, and transfer the metadata of the cached data from the backup cache area to the cache area. The metadata in the backup cache area thus belongs to data that the database node wanted to cache but could not because of limited space; recording this metadata allows the persistently cached data to be increased in a targeted way when the space grows, so that the added data has a higher hit rate. This avoids the low hit rate caused in the related art by randomly adding persistently cached data when the space grows, and the higher hit rate gives the data service of the database node lower latency, higher efficiency, and better performance.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with aspects of one or more embodiments of the present description as detailed in the accompanying claims.
It should be noted that in other embodiments, the steps of the corresponding method are not necessarily performed in the order shown and described in this specification. In some other embodiments, the method may include more or fewer steps than described in this specification. Furthermore, a single step described in this specification may be described as being split into multiple steps in other embodiments, while multiple steps described in this specification may be described as being combined into a single step in other embodiments.
First, some concepts related to the present description will be explained.
Persistent caching refers to storing cached data in a persistent storage medium so that the data can be reloaded and used after a program shuts down or loses power. Specifically, under the shared storage architecture, a database node needs to load part of the hot-spot data into local memory for caching, so that the delay of loading the data from shared storage in real time is avoided each time the database node accesses that data. However, the amount of hot-spot data to be cached is large, while memory space is very limited and serves many other purposes, so the hot-spot data of the shared data layer is stored persistently on a persistent storage medium, such as a local disk of the database node, to realize a persistent cache.
Elastic space scaling means that a database node can automatically adjust, on demand, the size of the space used for the persistent cache in a persistent storage medium, improving the flexibility of persistent-medium space usage.
Object storage is a data management architecture that efficiently and flexibly stores and retrieves large amounts of unstructured data by treating data as objects rather than files or blocks. The "everything is an object" concept emphasizes that all data, whether documents, pictures, videos, or logs, can be processed uniformly as objects, improving access efficiency and scalability.
In the related art, when a database node under the shared storage architecture shrinks, that is, when the disk space allocated for the persistent cache is reduced, the redundant data among the persistently cached data is deleted and its metadata is deleted synchronously; when the database node expands, that is, when the disk space allocated for the persistent cache is enlarged, part of the data in the shared data layer is randomly added to the local disk. As a result, the hit rate when the database node accesses data is low: the hot-spot data the database node needs to access has not always been persistently cached to the local disk in advance, and in that case the database node must load the needed data from the shared data layer in real time, so data access is inefficient and the performance of the database node is poor.
To address these technical problems, at least one embodiment of the present disclosure provides a data caching method, which may be applied to a database node, for example any database node of a database system with a shared storage architecture. The method correspondingly decreases or increases the data in the persistent cache when the space for persistently cached data in the local storage device of the database node shrinks or expands, and ensures that data access by the database node retains a high hit rate while that space changes elastically, thereby improving the data service performance of the database node.
Referring to fig. 1, the database system with a shared storage architecture may include a shared data layer and at least one database node, and preferably a plurality of database nodes, in communication with it. The shared data layer may take the form of object storage or the like, and stores user data as well as system-internal metadata (e.g., baseline, dump, and log data) for sharing by multiple database nodes. Each database node has a local storage device and memory. The local storage device is a persistent storage medium, such as a disk, used to persistently cache a portion of the data in shared storage; ideally, the cached data is data the database node will subsequently access, or even access frequently. The memory is a non-persistent storage medium used to store the metadata of the persistently cached data, where the metadata may include a key value, the size of the space required for storage, an offset, the area it resides in (i.e., the cache area or the backup cache area), an access heat value, and the like.
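To make the metadata layout above concrete, the following is a minimal Python sketch of one metadata entry carrying the fields just listed (key value, required size, offset, owning area, access heat). The class and field names are illustrative assumptions, not identifiers from this specification.

```python
from dataclasses import dataclass

# Illustrative sketch of one metadata entry; the class and field names are
# assumptions based on the fields listed above, not this specification's
# actual identifiers.
@dataclass
class CacheMeta:
    key: str       # key value identifying the cached data
    size: int      # space required to store the data, in bytes
    offset: int    # offset of the data on the local storage device
    area: str      # "cache" or "backup": the memory area holding this entry
    heat: int = 0  # access heat value, incremented on each access

meta = CacheMeta(key="page:42", size=8192, offset=0, area="cache")
meta.heat += 1     # the data was just accessed
```

Only this small record lives in memory; the data itself stays on the persistent medium.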
Each database node exists as an instance or replica of the database system. Based on shared storage, the system forms an architecture in which storage and compute are separated; storing the data in the shared data layer both avoids occupying the local storage space of the database nodes and enables data sharing among different database nodes.
Referring to fig. 2, a flowchart of the method is schematically shown, including steps S201 and S202.
In step S201, if the space occupied by the persistently cached data on the local storage device is greater than the persistent cache threshold, part of the persistently cached data on the local storage device is deleted so that the space occupied by the persistently cached data on the local storage device is not greater than the persistent cache threshold, and the metadata of the deleted data is transferred from the cache area in memory to the backup cache area.
Here, the persistent cache threshold is a space threshold allocated by the database node on the local storage device for persistent caching.
The cache area and the backup cache area are configured in memory and are used to store the metadata of data. A recent access list and a frequent access list are configured inside the cache area, and the metadata in the cache area is divided between these two lists: the metadata of a piece of data is placed in the recent access list if the data has been accessed recently but only once, and in the frequent access list if the data has been accessed recently and multiple times. The backup cache area is likewise configured with a recent access list and a frequent access list, and the metadata in the backup cache area is divided between them in the same way.
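The two-list division described above can be sketched as follows. This is a hedged illustration of the stated rule only (a first recent access places an entry in the recent list, a repeat access promotes it to the frequent list); class and method names are assumptions, not the specification's implementation.

```python
from collections import OrderedDict

class TwoListArea:
    # Sketch of one memory area (cache area or backup cache area) split
    # into a recent access list and a frequent access list, as described
    # above. An entry seen for the first time goes to the recent list; a
    # repeat access promotes it to the frequent list. Names are assumed.
    def __init__(self):
        self.recent = OrderedDict()    # accessed recently, once
        self.frequent = OrderedDict()  # accessed recently, more than once

    def touch(self, key, meta=None):
        if key in self.frequent:           # repeat hit: refresh MRU position
            self.frequent.move_to_end(key)
        elif key in self.recent:           # second access: promote
            self.frequent[key] = self.recent.pop(key)
        else:                              # first access: record as recent
            self.recent[key] = meta

area = TwoListArea()
area.touch("page:7", meta={"size": 4096, "heat": 1})
area.touch("page:7")
```

After the second access, the entry for "page:7" has moved from the recent list to the frequent list.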
For example, when this step deletes part of the persistently cached data on the local storage device, the data whose metadata is in the recent access list may be preferentially selected for deletion. As another example, the data with a lower access heat value may be preferentially selected for deletion.
The situation in this step in which the space occupied by the persistently cached data on the local storage device is greater than the persistent cache threshold is usually caused by capacity reduction of the database node, that is, by the database node lowering the persistent cache threshold. This is because, when persistently caching data, the database node tightly controls the space occupied by the persistently cached data so that it does not exceed the persistent cache threshold. Specifically, it can do so through cache eviction: if the space required by the data currently to be cached is greater than the remaining space available for the persistent cache on the local storage device, part of the persistently cached data on the local storage device is deleted so that the data currently to be cached can be cached to the local storage device, and the metadata of the deleted data is transferred from the cache area in memory to the backup cache area. In this scenario, the data with a lower heat value among the persistently cached data may be preferentially deleted, or the data whose metadata is in the recent access list may be preferentially deleted.
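The eviction loop described above can be sketched as a small Python function. This is a simplified illustration under stated assumptions (flat-dict areas, recent list before frequent list, lowest heat first); the function name and data layout are hypothetical, not the patent's implementation.

```python
def evict_for(required, used, threshold, cache_area, backup_area):
    # Sketch of the cache-eviction step above: delete persistently cached
    # data until the incoming `required` bytes fit under `threshold`, and
    # move each victim's metadata from the cache area to the backup area.
    # Victims come from the recent access list first, lowest heat first,
    # matching the preferences stated above. Returns the new used size.
    while used + required > threshold:
        src = cache_area["recent"] or cache_area["frequent"]  # prefer recent list
        key, meta = min(src.items(), key=lambda kv: kv[1]["heat"])
        del src[key]                 # the data itself would be deleted from disk
        backup_area[key] = meta      # only the metadata is retained, in memory
        used -= meta["size"]
    return used

cache_area = {"recent": {"a": {"size": 40, "heat": 1},
                         "b": {"size": 30, "heat": 5}},
              "frequent": {}}
backup_area = {}
used = evict_for(required=50, used=70, threshold=100,
                 cache_area=cache_area, backup_area=backup_area)
```

With 70 of 100 bytes used and 50 more needed, the low-heat entry "a" is evicted; its metadata lands in the backup area while "b" stays cached.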
From the operation in this step it can be seen that the data to which the metadata in the backup cache area belongs is not persistently cached on the local storage device; that is, it is data that the database node wanted to cache persistently on the local storage device but could not because of limited space. In other words, it is data that has been evicted but may still be accessed and has a relatively high heat.
For example, when metadata is transferred from the cache area to the backup cache area in this step, metadata in the recent access list of the cache area may be transferred to the recent access list of the backup cache area, and metadata in the frequent access list of the cache area may be transferred to the frequent access list of the backup cache area.
As another example, when metadata is transferred from the cache area to the backup cache area in this step, if the remaining space in the backup cache area is less than the space required by the metadata of the deleted data, the space allocated to the backup cache area in memory is adjusted to be not less than the space required by that metadata, and the metadata of the deleted data is then transferred from the cache area to the backup cache area. Because the total amount of metadata is unchanged during the transfer and the memory set aside for metadata is unchanged, the backup cache area can be enlarged by adjusting the ratio between the cache area and the backup cache area, so that the metadata of the deleted data can be fully retained in memory. As described above, the data to which the metadata in the backup cache area belongs is data the database node wanted to cache persistently but could not because of limited space; in this exemplary manner, the database node can record as much of the data it wishes to cache as possible.
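The capacity adjustment described above can be sketched numerically. The function below is a hypothetical illustration: the total metadata budget in memory stays fixed, and a shortfall in the backup area is covered by shifting exactly that much capacity out of the cache area's share (the 1:1 shift is an assumption for illustration).

```python
def demote_meta(meta_size, cache_cap, backup_cap, backup_used):
    # Sketch of the adjustment above: the total memory budget for metadata
    # is fixed, so when the backup cache area cannot hold the demoted
    # metadata, capacity is shifted from the cache area's share to the
    # backup area's share. Sizes are in bytes; names are assumptions.
    shortfall = meta_size - (backup_cap - backup_used)
    if shortfall > 0:              # backup area too small: enlarge it
        cache_cap -= shortfall
        backup_cap += shortfall
    return cache_cap, backup_cap, backup_used + meta_size

cache_cap, backup_cap, backup_used = demote_meta(
    meta_size=30, cache_cap=100, backup_cap=20, backup_used=10)
```

Here 30 bytes of demoted metadata need 20 more bytes than the backup area has free, so the split changes from 100/20 to 80/40 while the 120-byte total is preserved.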
In step S202, if the space occupied by the persistently cached data on the local storage device is less than the persistent cache threshold, the data to which some or all of the metadata in the backup cache area in memory belongs is cached on the local storage device, and the metadata of the cached data is transferred from the backup cache area in memory to the cache area.
As described above, the data to which the metadata in the backup cache area belongs is data the database node wanted to cache persistently on the local storage device but could not because of limited space, and its metadata was recorded in step S201 when the database node shrank (i.e., when the persistent cache threshold was lowered). Using this metadata to add data to the persistent cache in this step therefore increases the access hit rate of the database node; in particular, compared with randomly selecting data to cache persistently, it avoids blindness.
Under the shared storage system architecture, this step may include: if the space occupied by the persistently cached data on the local storage device is less than the persistent cache threshold, caching, from the shared data layer to the local storage device, the data to which some or all of the metadata in the backup cache area in memory belongs, and transferring the metadata of the cached data (i.e., of the data newly cached in this step) from the backup cache area in memory to the cache area.
It should be understood that if metadata is stored in the backup cache area of the memory, this indicates that the database node previously recorded data in the backup cache area because of capacity reduction. If no metadata is stored in the backup cache area of the memory, this indicates that the database node did not previously record data in the backup cache area because of capacity reduction.
The scenario targeted by this step is one in which the database node expands after having shrunk some time earlier: the space occupied by the persistently cached data on the local storage device is less than the persistent cache threshold because the threshold was suddenly raised by the expansion, and metadata is stored in the backup cache area. The caching, on the local storage device, of the data to which some or all of the metadata in the backup cache area in memory belongs may therefore be performed in the manner of either of the following two alternative examples.
Alternative example 1
If the space occupied by the persistently cached data on the local storage device is less than the persistent cache threshold, metadata exists in the backup cache area in memory, and the space required by the data to which all of the metadata in the backup cache area belongs is not greater than the remaining space available for the persistent cache on the local storage device, the data to which all of the metadata in the backup cache area belongs is cached to the local storage device.
Alternative example 2
If the space occupied by the persistently cached data on the local storage device is less than the persistent cache threshold, metadata exists in the backup cache area in memory, and the space required by the data to which the metadata in the backup cache area belongs is greater than the remaining space available for the persistent cache on the local storage device, the data to which part of the metadata in the backup cache area belongs is cached to the local storage device.
For example, in this alternative, the data to which part of the metadata in the backup cache area belongs may be cached to the local storage device on the principle of preferentially transferring the data to which the metadata in the frequent access list belongs.
As another example, the data to which part of the metadata in the backup cache area belongs may be cached to the local storage device on the principle of preferentially transferring data with a high access heat value.
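The two priority rules above (frequent list before recent list, higher heat first) can be combined in one selection pass. The sketch below is a hypothetical illustration with assumed names and a flat-dict layout, not the specification's implementation.

```python
def promote(backup_area, free_space):
    # Sketch of the expansion step above: choose entries from the backup
    # cache area, frequent list before recent list and higher heat first
    # (both preferences are stated above), until the newly freed
    # persistent-cache space is used up. Returns the keys of the data to
    # re-cache from the shared data layer.
    chosen = []
    for lst in (backup_area["frequent"], backup_area["recent"]):
        for key, meta in sorted(lst.items(), key=lambda kv: -kv[1]["heat"]):
            if meta["size"] <= free_space:
                free_space -= meta["size"]
                chosen.append(key)
    return chosen

backup_area = {"frequent": {"x": {"size": 50, "heat": 9},
                            "y": {"size": 60, "heat": 3}},
               "recent": {"z": {"size": 20, "heat": 7}}}
picked = promote(backup_area, free_space=80)
```

With 80 bytes freed, "x" (frequent, heat 9) is taken first, "y" no longer fits, and "z" from the recent list fills the remainder.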
If no metadata is stored in the backup cache area of the memory, this indicates that the persistent cache space in the local storage device has not yet been fully occupied since the database node started running. In this situation, it is only necessary to proceed in the general manner of persistent data caching: part of the data in the shared data layer is cached to the local storage device of the database node, gradually occupying the persistent cache space, and the metadata of the persistently cached data is added to the cache area in memory.
When a database node expands, two modes may be adopted: vertical expansion and horizontal expansion. Vertical expansion refers to adding, on a single machine, space for the persistent cache on the local storage device. Horizontal expansion refers to adding an expansion node to the database node to increase the local storage, and using part or all of the newly added local storage to increase the space for the persistent cache.
For example, with vertical expansion, persistent caching of data may be completed in the manner described in the embodiments above.
For example, with horizontal expansion, persistent caching of data may further be completed as follows: if an expansion node is added to the database node so that the space occupied by the persistently cached data on the local storage device of the database node is less than the persistent cache threshold, the metadata in the backup cache area in the memory of the database node is sent to the expansion node, so that the expansion node caches, on its local storage device, the data to which some or all of the received metadata belongs, adds the metadata of the cached data to the cache area in its memory, and adds the remaining metadata to the backup cache area in its memory.
Compared with vertical expansion, horizontal expansion requires the original node to share the metadata in the backup cache area of its memory with the newly added expansion node, so that the expansion node can purposefully cache part of the data in the shared data layer persistently, based on the metadata from the backup cache area.
As for further details of horizontal expansion, part of the data of the shared data layer may be cached to the local storage device in the manner described in the above embodiments:
If the space required by the data to which all of the metadata received by the expansion node belongs is not greater than the space available for the persistent cache on the local storage device of the expansion node, the data to which all of the received metadata belongs is cached to the local storage device.
If the space required by the data to which all of the metadata received by the expansion node belongs is greater than the space available for the persistent cache on the local storage device of the expansion node, the data to which part of the received metadata belongs is cached to the local storage device. Preferably, the data to which the metadata in the frequent access list belongs is transferred first; also preferably, the data with a high access heat value is transferred first.
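The expansion node's side of this handoff can be sketched as follows. This is a hedged illustration under assumed names and a flattened metadata layout: the node caches the data for as much of the received metadata as fits (higher heat first, per the preference above), keeps that metadata in its cache area, and parks the rest in its own backup cache area.

```python
def receive_handoff(metas, local_capacity):
    # Sketch of the horizontal-expansion handoff above: the expansion node
    # receives the original node's backup-area metadata, caches the data
    # for as much of it as its persistent-cache space allows (higher heat
    # first), keeps that metadata in its cache area, and parks the rest
    # in its own backup cache area for a possible later expansion.
    cache_area, backup_area, used = {}, {}, 0
    for key, meta in sorted(metas.items(), key=lambda kv: -kv[1]["heat"]):
        if used + meta["size"] <= local_capacity:
            used += meta["size"]
            cache_area[key] = meta   # the data itself comes from the shared data layer
        else:
            backup_area[key] = meta  # recorded, not cached, for now
    return cache_area, backup_area

received = {"a": {"size": 60, "heat": 5},
            "b": {"size": 50, "heat": 9},
            "c": {"size": 30, "heat": 1}}
cache_area, backup_area = receive_handoff(received, local_capacity=100)
```

With 100 bytes of capacity, "b" (heat 9) is cached first, "a" no longer fits and is parked, and "c" still fits in the remaining space.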
Referring to fig. 3, the data caching method of the above embodiments is described below through an exemplary embodiment, which includes the following steps:
Persistent cache space allocation: the persistent cache threshold in the local disk of the database node is 100 GB, and the cache area in memory is the space required by the metadata of 100 GB of persistently cached data.
Persistent data caching: as data is accessed, the database node gradually fills the space for the persistent cache in the local disk, that is, it writes 100 GB of persistently cached data.
Persistent cache space reduction: the database node shrinks, lowering the persistent cache threshold to 20 GB; 80 GB of the 100 GB of persistently cached data is evicted, that is, deleted; the cache area in memory is configured as the space required by the metadata of 20 GB of persistently cached data, and the backup cache area as the space required by the metadata of 80 GB of persistently cached data; and the metadata of the deleted 80 GB of data is transferred from the cache area to the backup cache area.
Persistent cache space expansion: the database node expands, raising the persistent cache threshold back to 100 GB, and the 80 GB of data to which the metadata in the backup cache area belongs is then persistently cached to the local disk, reaching the persistent cache threshold. For example, the data persisted to the local disk is selected by the key values in the metadata in the backup cache area.
In this process, the metadata of the data deleted in the shrink scenario is recorded in the backup cache area of memory, and the recorded metadata can be used to cache data purposefully and persistently in the expansion scenario, overcoming the blindness and randomness of the related art and improving the hit rate of database node accesses. In addition, when metadata is recorded in the backup cache area, the space of the backup cache area is enlarged so as to record as much metadata as possible, ensuring that the expansion scenario has enough metadata to guide the selection of data for the persistent cache.
In the data caching method provided by the embodiments of the present specification, the database node is configured with a cache area and a backup cache area in memory. When persistently caching data, the database node stores the data on a local storage device such as a disk and, at the same time, loads the metadata of the cached data into the cache area. When the space for the persistent cache in the local storage device shrinks, part of the cached data can be deleted, and the metadata of the deleted data is transferred from the cache area to the backup cache area. When the space for the persistent cache in the local storage device grows, the database node can persistently cache the data to which the metadata in the backup cache area belongs to the local storage device, and transfer the metadata of the cached data from the backup cache area to the cache area. The metadata in the backup cache area thus belongs to data that the database node wanted to cache but could not because of limited space; recording this metadata allows the persistently cached data to be increased in a targeted way when the space grows, so that the added data has a higher hit rate. This avoids the low hit rate caused in the related art by randomly adding persistently cached data when the space grows, and the higher hit rate gives the data service of the database node lower latency, higher efficiency, and better performance.
Fig. 4 is a schematic block diagram of a device according to an exemplary embodiment. Referring to Fig. 4, at the hardware level, the device includes a processor 402, an internal bus 404, a network interface 406, a memory 408, and a non-volatile storage 410, and may of course also include hardware required by other services. One or more embodiments of this specification may be implemented in software, for example by the processor 402 reading the corresponding computer program from the non-volatile storage 410 into the memory 408 and then running it. Of course, in addition to a software implementation, one or more embodiments of this specification do not exclude other implementations, such as a logic device or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or a logic device.
Referring to Fig. 5, the data caching apparatus may be applied to a database node running on the device shown in Fig. 4, so as to implement the technical solutions of this specification. The data caching apparatus may include:
the capacity reduction module 501 is configured to delete part of the data in the persistent cache on the local storage device if the space occupied by the data in the persistent cache on the local storage device is greater than the persistent cache threshold, so that the space occupied by the data in the persistent cache on the local storage device is not greater than the persistent cache threshold, and transfer the metadata of the deleted data from the cache area in the memory to the backup cache area;
The capacity expansion module 502 is configured to cache, if a space occupied by data in a persistent cache on a local storage device is smaller than a persistent cache threshold, data to which some or all metadata in a backup cache area in a memory belongs to the local storage device, and transfer metadata of the cached data from the backup cache area to the cache area in the memory;
wherein the persistent cache threshold is a spatial threshold for persistent caching allocated by the database node on the local storage device.
In one embodiment of the present specification, the apparatus further comprises an eviction module configured to:
if the space required by the data currently to be cached is greater than the remaining space in the space for the persistent cache on the local storage device, delete part of the data persistently cached on the local storage device so that the data currently to be cached can be cached on the local storage device, and transfer the metadata of the deleted data from the cache area in the memory to the backup cache area.
In one embodiment of the present specification, the capacity expansion module is configured to:
if the space occupied by the persistently cached data on the local storage device is smaller than the persistent cache threshold, cache, through the shared data layer, part or all of the data to which the metadata in the backup cache area in the memory belongs to the local storage device, and transfer the metadata of the cached data from the backup cache area in the memory to the cache area.
In an embodiment of the present disclosure, when caching, to the local storage device, the data to which part or all of the metadata in the backup cache area in the memory belongs if the space occupied by the persistently cached data on the local storage device is smaller than the persistent cache threshold, the capacity expansion module is configured to:
if the space occupied by the persistently cached data on the local storage device is smaller than the persistent cache threshold, metadata exists in the backup cache area in the memory, and the space required by the data to which the metadata in the backup cache area belongs is smaller than the remaining space in the space for the persistent cache on the local storage device, cache the data to which all the metadata in the backup cache area belongs to the local storage device;
and if the space occupied by the persistently cached data on the local storage device is smaller than the persistent cache threshold, metadata exists in the backup cache area in the memory, and the space required by the data to which the metadata in the backup cache area belongs is greater than the remaining space in the space for the persistent cache on the local storage device, cache the data to which part of the metadata in the backup cache area belongs to the local storage device.
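The all-or-part decision in the two branches above can be illustrated with a small helper. The function name `plan_expand` and the simple prefix selection are hypothetical; they stand in for whatever selection policy a real implementation uses.

```python
def plan_expand(backup_meta, remaining_space):
    """Decide which backup-area entries to re-cache (illustrative).

    backup_meta: list of (key, size) pairs describing the data to which
    the metadata in the backup cache area belongs.
    remaining_space: remaining space in the persistent cache.
    """
    total = sum(size for _, size in backup_meta)
    if total <= remaining_space:
        # All data described by the backup metadata fits: cache everything.
        return list(backup_meta)
    # Otherwise cache only the part that fits (prefix order is arbitrary).
    chosen, used = [], 0
    for key, size in backup_meta:
        if used + size <= remaining_space:
            chosen.append((key, size))
            used += size
    return chosen
```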
In one embodiment of the present disclosure, the backup cache area includes a latest access list and a frequent access list, and when caching, to the local storage device, the data to which part of the metadata in the backup cache area belongs, the capacity expansion module is configured to:
cache the data to which part of the metadata in the backup cache area belongs to the local storage device, on the principle of preferentially transferring the data to which the metadata in the frequent access list belongs.
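Assuming the backup cache area keeps its metadata in a latest access list and a frequent access list as the embodiment describes (a two-list layout reminiscent of ARC-style recency/frequency lists, though the names here are hypothetical), the priority rule can be sketched as:

```python
def pick_for_recache(latest_list, frequent_list, budget):
    """Illustrative sketch: choose which entries to re-cache, preferring
    metadata from the frequent access list over the latest access list.

    Each list holds (key, size) pairs; budget is the remaining
    persistent-cache space.
    """
    chosen, used = [], 0
    # Frequent-list entries come first, so they are taken preferentially.
    for key, size in list(frequent_list) + list(latest_list):
        if used + size <= budget:
            chosen.append(key)
            used += size
    return chosen
```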
In one embodiment of the present disclosure, the metadata includes the access heat of the data, and when caching, to the local storage device, the data to which part of the metadata in the backup cache area belongs, the capacity expansion module is configured to:
cache the data to which part of the metadata in the backup cache area belongs to the local storage device, on the principle of preferentially transferring the data with higher access heat.
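If each metadata entry carries the access heat of its data, the heat-first selection can be sketched as follows. The helper name `pick_by_heat` and the key-to-(size, heat) mapping are hypothetical conventions for illustration only.

```python
def pick_by_heat(backup_meta, budget):
    """Illustrative sketch: re-cache the hottest data first.

    backup_meta: dict mapping key -> (size, access_heat).
    budget: remaining persistent-cache space.
    """
    chosen, used = [], 0
    # Sort backup-area metadata by access heat, highest first.
    for key, (size, heat) in sorted(backup_meta.items(),
                                    key=lambda kv: kv[1][1], reverse=True):
        if used + size <= budget:
            chosen.append(key)
            used += size
    return chosen
```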
In an embodiment of the present disclosure, when transferring the metadata of the deleted data from the cache area in the memory to the backup cache area, the capacity reduction module is configured to:
if the remaining space in the backup cache area in the memory is smaller than the space required by the metadata of the deleted data, adjust the space allocated to the backup cache area in the memory to be not smaller than the space required by the metadata of the deleted data, and transfer the metadata of the deleted data from the cache area to the backup cache area.
In one embodiment of the present specification, the capacity expansion module is configured to:
if an expansion node is added for the database node so that the space occupied by the persistently cached data on the local storage device of the database node is smaller than the persistent cache threshold, send the metadata in the backup cache area in the memory of the database node to the expansion node, so that the expansion node caches the data to which part or all of the received metadata belongs on its local storage device, adds the metadata of the cached data to the cache area in its memory, and adds the remaining metadata to the backup cache area in its memory.
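The hand-off to an expansion node can be sketched as follows. This is illustrative only; how the metadata is transmitted over the network and how the expansion node fetches the actual data are outside the sketch, and the function name and capacity model are hypothetical.

```python
def hand_off_to_expansion_node(backup_meta, node_capacity):
    """Illustrative sketch: the expansion node receives the original
    node's backup metadata, caches the data that fits locally, and
    keeps the rest of the metadata in its own backup cache area.

    backup_meta: dict mapping key -> size, as received from the
    original database node.
    """
    cached, backup, used = {}, {}, 0
    for key, size in backup_meta.items():
        if used + size <= node_capacity:
            cached[key] = size   # data cached on the expansion node
            used += size
        else:
            backup[key] = size   # metadata kept in the backup cache area
    return cached, backup
```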
The details of each module in the above apparatus have already been described in detail in the data caching method of the first aspect, and are not repeated here.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage, quantum memory, graphene-based storage media, or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
User information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in this specification are information and data authorized by the user or fully authorized by all parties. The collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation portals are provided for users to choose to authorize or refuse.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of the present description to describe various information, this information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments of the present description. The term "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
The foregoing description of the preferred embodiments is merely intended to illustrate the embodiments of the present invention, and is not intended to limit the embodiments of the present invention to the particular embodiments described.