
CN115421919B - Memory management method, device, memory manager, equipment and storage medium - Google Patents

Memory management method, device, memory manager, equipment and storage medium

Info

Publication number
CN115421919B
CN115421919B CN202211139508.0A CN202211139508A CN115421919B CN 115421919 B CN115421919 B CN 115421919B CN 202211139508 A CN202211139508 A CN 202211139508A CN 115421919 B CN115421919 B CN 115421919B
Authority
CN
China
Prior art keywords
memory
slice
capacity
data
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211139508.0A
Other languages
Chinese (zh)
Other versions
CN115421919A
Inventor
周华伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202211139508.0A priority Critical patent/CN115421919B/en
Publication of CN115421919A publication Critical patent/CN115421919A/en
Priority to PCT/CN2023/098519 priority patent/WO2024060682A1/en
Application granted granted Critical
Publication of CN115421919B publication Critical patent/CN115421919B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4812Task transfer initiation or dispatching by interrupt, e.g. masked
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System (AREA)

Abstract


The embodiments of the present application disclose a memory management method, apparatus, memory manager, device and storage medium, which relate to the field of memory management. The method comprises: determining memory slices according to the service types corresponding to different service data; and adjusting the slice capacity of a memory slice based on the interrupt frequency of the memory slice during service processing, wherein the interrupt frequency refers to the frequency at which the memory usage of the memory slice exceeds the memory waterline, and the memory waterline is the threshold at which the memory slice triggers an interrupt. Through a dynamic memory management mechanism, the method provided in the embodiments of the present application can reasonably adjust the slice capacities of the memory slices corresponding to different service types, thereby improving memory utilization.

Description

Memory management method, device, memory manager, equipment and storage medium
Technical Field
The embodiment of the application relates to the field of memory management, in particular to a memory management method, a memory management device, a memory manager, equipment and a storage medium.
Background
Reasonable use of memory has a large impact on the power consumption of a terminal and on the chip area inside the terminal. The objects that use memory include service data of different service types.
In the related art, the memory is divided according to the different service types. During division, each service is allocated memory according to the maximum amount of memory it may need. In this way, once the memory amounts corresponding to the various services are accumulated, a large memory space is occupied and memory is easily wasted.
Disclosure of Invention
The embodiment of the application provides a memory management method, a memory management device, a memory manager, equipment and a storage medium. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a memory management method, where the method includes:
Determining memory slices according to the service types corresponding to different service data;
and adjusting the slice capacity of a memory slice based on the interrupt frequency of the memory slice in the service processing process, wherein the interrupt frequency refers to the frequency at which the memory usage of the memory slice is greater than the memory waterline, and the memory waterline is the threshold at which the memory slice triggers an interrupt.
In another aspect, an embodiment of the present application provides a memory management device, where the device includes:
The memory dividing module is used for determining memory slices according to the service types corresponding to different service data;
The memory adjusting module is used for adjusting the slice capacity of a memory slice based on the interrupt frequency of the memory slice in the service processing process, wherein the interrupt frequency refers to the frequency at which the memory usage of the memory slice is greater than the memory waterline, and the memory waterline is the threshold at which the memory slice triggers an interrupt.
In another aspect, an embodiment of the present application provides a memory manager, where the memory manager includes programmable logic circuits and/or program instructions, and the memory manager is configured to implement the memory management method according to the foregoing aspect when the memory manager is running.
In another aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor and a memory, and the memory includes the memory manager according to the above aspect.
In another aspect, embodiments of the present application provide a computer storage medium having at least one program code stored therein, the program code being loaded and executed by a processor to implement the memory management method according to the above aspect.
In another aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the terminal performs the memory management method provided in various alternative implementations of the above aspect.
The technical scheme provided by the embodiment of the application has the beneficial effects that at least:
In the embodiment of the application, a memory manager in a computer device determines the memory slices used to store different service data based on the service types corresponding to the different service data. Then, according to the frequency at which the memory usage exceeds the memory waterline during service processing, the slice capacity of each memory slice can be dynamically adjusted: the capacity of a memory slice with a higher interrupt frequency can be increased, and the capacity of a memory slice with a lower interrupt frequency can be reduced. Compared with the related-art approach of planning memory capacity from the estimated maximum memory usage of each service, the embodiment of the application can reasonably adjust the slice capacities of the memory slices corresponding to different service types through a dynamic memory management mechanism, thereby improving memory utilization and saving memory space. Moreover, because the slice capacity is adjusted based on the interrupt frequency, frequent triggering of interrupts can be reduced, which helps reduce power consumption and improve the efficiency of batch processing of service data.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart illustrating a memory management method according to an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of memory slice division according to an exemplary embodiment of the present application;
FIG. 3 is a flow chart illustrating a memory management method according to another exemplary embodiment of the present application;
FIG. 4 is a schematic diagram illustrating adjustment of the capacity of a memory slice according to an exemplary embodiment of the present application;
FIG. 5 is a flow chart of a memory management method according to another exemplary embodiment of the present application;
fig. 6 is a schematic diagram illustrating adjustment of a capacity of a memory slice according to another exemplary embodiment of the present application;
fig. 7 is a schematic diagram illustrating adjustment of a capacity of a memory slice according to another exemplary embodiment of the present application;
FIG. 8 is a block diagram illustrating a memory management device according to an embodiment of the present application;
fig. 9 is a block diagram showing the structure of a computer device according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
References herein to "a plurality" mean two or more. "And/or" describes an association between associated objects and indicates that three relationships are possible; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The objects that use memory include service data, and service data of different service types all need to use memory. In the related art, the maximum amount of memory used by each type of service is estimated, and the total memory amount is determined based on these maximum amounts. In this way, after the estimated maximum memory amounts of the service types are accumulated, a large memory space is occupied; in actual use, not all services occupy their maximum memory amount, so memory is wasted.
Therefore, an embodiment of the application provides a memory management method that realizes dynamic management of the memory during service processing: the memory occupation corresponding to different services can be adjusted in real time according to the actual memory usage, saving memory. The method provided by the embodiment of the application can be applied to a computer device. The computer device may be a mobile terminal device such as a smart phone, a tablet computer or a laptop computer, or a desktop computer, a projection computer, etc., which is not limited in the embodiment of the present application.
Referring to fig. 1, a flowchart of a memory management method according to an exemplary embodiment of the application is shown. This embodiment is described by taking the method as an example performed by a memory manager in a computer device, and the process includes the following steps:
Step 101, determining memory slices according to the service types corresponding to different service data.
In general, when service data of different service types is stored, a memory block is divided into different slices that are used to store the different types of service data. Thus, in one possible implementation, the memory manager may perform a preliminary division of the memory to obtain different memory slices, where different memory slices are used to store service data of different service types.
Optionally, during the preliminary division, the memory manager may divide the memory uniformly to obtain a memory slice for the service data of each service type, or divide the slices according to the characteristics of the different service types. When dividing, a memory slice with a larger capacity can be allocated to a commonly used service, and a memory slice with a smaller capacity can be allocated to an uncommon service.
Illustratively, as shown in fig. 2, taking the division of memory slices for service type 1, service type 2 and service type 3 as an example, three memory slices can be obtained. The first memory slice 201 is used for storing service data of service type 1, the second memory slice 202 is used for storing service data of service type 2, and the third memory slice 203 is used for storing service data of service type 3.
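As a concrete illustration of this preliminary division, the following C sketch splits one memory block into per-service-type slices according to preset shares; the struct layout, names and percentage-based split are assumptions made for illustration, not details taken from the patent.

```c
#include <stdint.h>

#define SLICE_COUNT 3   /* service types 1-3, mirroring fig. 2 (assumed) */

typedef struct {
    uint32_t start;     /* start address, in minimum memory units  */
    uint32_t capacity;  /* slice capacity, in minimum memory units */
} mem_slice_t;

/* Split a memory block of `total` units into SLICE_COUNT contiguous slices
 * according to per-service-type shares given in percent. */
static void init_slices(mem_slice_t s[SLICE_COUNT], uint32_t total,
                        const uint32_t share_pct[SLICE_COUNT])
{
    uint32_t addr = 0;
    for (int i = 0; i < SLICE_COUNT; i++) {
        s[i].start    = addr;
        s[i].capacity = total * share_pct[i] / 100;
        addr         += s[i].capacity;
    }
}
```

With shares of, say, {50, 30, 20}, the three slices of fig. 2 would be laid out back to back in one contiguous block.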
Step 102, adjusting the capacity of the memory slice based on the interrupt frequency of the memory slice in the service processing process, wherein the interrupt frequency refers to the frequency that the memory usage of the memory slice is greater than the memory waterline, and the memory waterline is the threshold value of the memory slice triggering interrupt.
Each memory slice is provided with a memory waterline. The memory waterline refers to the threshold at which the memory slice triggers an interrupt. The memory waterline may be the maximum capacity of the memory slice, or may be an intermediate value of the slice capacity, that is, smaller than the maximum capacity of the memory slice, which is not limited in this embodiment.
When the memory usage of a memory slice is greater than the memory waterline, an interrupt is triggered so that the processor can process the data in the memory slice in time. When interrupts are triggered frequently, the power consumption of the terminal increases and the efficiency of batch processing of services is affected.
Therefore, in one possible implementation, the slice capacity of the memory slice is adjusted according to the interrupt frequency of the memory slice, so as to reduce the probability of frequently triggered interrupts. The memory manager counts the interrupt frequency of each memory slice and, each time an interrupt is triggered, updates the corresponding interrupt frequency.
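The waterline check and interrupt counting could look roughly like the following C sketch; the field and function names are assumptions, and the real trigger would sit in the memory manager's write path rather than in C code.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t used;       /* current memory usage of the slice              */
    uint32_t waterline;  /* threshold that triggers a processing interrupt */
    uint32_t irq_count;  /* how often usage crossed the waterline          */
} slice_state_t;

/* Stand-in for handing the slice over to the processor (assumed hook). */
static void raise_processing_interrupt(int slice_id)
{
    printf("interrupt: slice %d ready for processing\n", slice_id);
}

/* Called whenever service data is written into a slice: update usage,
 * and when it exceeds the waterline, count the event and raise the IRQ. */
static void on_data_stored(slice_state_t *st, int slice_id, uint32_t units)
{
    st->used += units;
    if (st->used > st->waterline) {
        st->irq_count++;              /* statistics later used for resizing */
        raise_processing_interrupt(slice_id);
    }
}
```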
After the interrupt frequency of each memory slice is obtained, the slice capacity of the memory slice is adjusted based on the interrupt frequency of each memory slice.
In one possible implementation, when the interrupt frequency of a memory slice is high, the memory manager may increase its slice capacity, so that the maximum amount of memory the slice is allowed to use increases accordingly, which can effectively reduce the interrupt frequency. During the adjustment of slice capacities, the total memory capacity is kept unchanged; therefore, when the capacity of one memory slice is increased, the capacity of another memory slice must be reduced correspondingly. In order to reduce the influence of the capacity adjustment on other service processing, after the memory slice whose capacity needs to be increased is determined, a memory slice whose capacity can be reduced is determined according to the interrupt frequency, and the slice capacities are adjusted only when such a memory slice exists. Alternatively, when the interrupt frequency is low, the memory usage of the corresponding memory slice is probably low as well, so the slice capacity of a memory slice with a lower interrupt frequency can be reduced, improving memory utilization.
In the embodiment of the application, after the memory slices of the different services are initially divided, the slice capacity of each memory slice is dynamically adjusted according to its actual usage during service processing. During service processing, the memory usage of different services differs: some services may need more memory while others need less. Reasonably adjusting the slice capacities of the memory slices corresponding to the different services therefore improves memory utilization and avoids memory waste.
In summary, in the embodiment of the present application, the memory manager in the computer device first determines the memory slices storing different service data based on the service types corresponding to the different service data. Then, according to the frequency at which the memory usage exceeds the memory waterline during service processing, the slice capacity of each memory slice can be dynamically adjusted: the capacity of a memory slice with a higher interrupt frequency can be increased, and the capacity of a memory slice with a lower interrupt frequency can be reduced. Compared with the related-art approach of planning memory capacity from the estimated maximum memory usage of each service, the embodiment of the application can reasonably adjust the slice capacities of the memory slices corresponding to different services through a dynamic memory management mechanism, thereby improving memory utilization and saving memory space. Moreover, because the slice capacity is adjusted based on the interrupt frequency, frequent triggering of interrupts can be reduced, which helps reduce power consumption and improve the efficiency of batch processing of service data.
During data processing, data may need to be moved from one node to another for processing, and different memory nodes need to be set up to realize this data movement. In one possible implementation, the memory may first be divided into different memory nodes, and each memory node may then be divided into different memory slices. When dividing the memory into nodes, the memory capacity required by each memory node can be determined first, and the node division then performed based on that capacity. When each memory node is divided into slices, the division can be performed according to service usage frequency, giving the memory an initial reasonable plan. Exemplary embodiments are described below.
Referring to fig. 3, a flowchart of a memory management method according to another exemplary embodiment of the application is shown. This embodiment is described by taking the method as an example performed by a memory manager in a computer device, and the process includes the following steps:
In step 301, the memory is divided into nodes to obtain memory nodes; memory movement exists between different memory nodes.
In this embodiment, the memory manager first performs node division on the memory, so as to obtain different memory nodes. In one possible implementation, the memory manager performs node partitioning based on whether the processed data is memory moved. When the memory is required to be moved, the memory nodes are required to be arranged to store the moved data. I.e., there is memory movement between different memory nodes.
Illustratively, the data processed by a modem (modem) chip is taken as an example. When the physical layer receives data from the network side, a series of data processing is performed on the physical layer, and after the processing is completed, the data is moved to the protocol layer, and a series of processing is performed by each processing layer in the protocol layer. When data is moved between the physical layer and the protocol layer, the data in the memory corresponding to the physical layer needs to be moved to the memory corresponding to the protocol layer, that is, memory movement exists, so that the physical layer and the protocol layer respectively correspond to one memory node. The protocol layer comprises each processing layer such as MAC/RLC/PDCP/SDAP, and each processing layer in the protocol layer can access data in the same memory node to process the data, so that memory movement does not exist when each processing layer in the protocol layer processes the data, and the protocol layer can be uniformly regarded as a node, namely only one corresponding memory node.
In the partitioning process, the memory manager may determine the memory capacity required to be used by each memory node, thereby partitioning the memory based on the memory capacity required to be used by each memory node. After the division, the memory capacity of each memory node is unchanged. In one possible implementation, the method for node partitioning the memory may include steps 301a-301c (not shown):
In step 301a, processing time of each data node for processing received data is determined based on a first data rate and a second data rate, the data node is a node for processing data of the memory node, the first data rate is a rate for receiving data by the data node, and the second data rate is a rate for processing data by the data node.
When the memory manager performs memory division, it first determines the memory capacity required by the memory nodes, and then obtains each memory node by dividing based on that memory capacity. For each memory node, there is a data node that processes the data in it. In one possible implementation, the memory capacity may be calculated according to the processing time each data node needs to process its data. The processing time of a data node refers to the time needed to process the data received per unit time.
Alternatively, when the rate of receiving data is faster and the rate of processing data is slower, the processing time is longer and the amount of memory required is greater. When the rate of receiving data is the same as or slower than the rate of processing data, the data can be processed in time, and the processing time is shorter, so the required amount of memory is smaller. The processing time may be determined based on a ratio of a rate at which the data node receives data (a first data rate) to a rate at which the data node processes data (a second data rate), thereby determining the content capacity based on the processing time.
Illustratively, taking the protocol layer as a data node, where the data rate supported by the protocol layer, that is, the rate of receiving data, is 8 G/s, and the rate at which the corresponding CPU processes data is 4 G/s, the processing time required by the CPU to process one second's worth of received data is 2 s.
In step 301b, the memory capacity required for each data node is determined based on the processing time and the first data rate.
When the processing time is determined, the product of the processing time and the first data rate of the received data may be determined as the memory capacity required for each data node.
Illustratively, when the first data rate of the data node is r and the processing time is t, the required memory capacity is r×t.
In step 301c, the memory is node-divided based on the memory capacity required by each data node, so as to obtain the memory node.
After determining the memory capacity required by each data node, the memory manager divides the memory according to the capacity required by each node to obtain the memory nodes. Determining the memory capacity from the actual data processing capability of each node and dividing the nodes accordingly maximizes memory utilization.
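Applying the formulas of steps 301a-301c directly gives the following sketch; the units and the main() example are illustrative, and the note about bounding the processing window is an assumption rather than something stated here.

```c
#include <stdio.h>

/* Processing time is taken as the ratio of receive rate to processing rate,
 * and the node capacity as (receive rate) x (processing time), directly
 * applying the formulas in steps 301a-301c. */
static double node_capacity(double rx_rate, double proc_rate)
{
    double t = rx_rate / proc_rate;   /* processing time per second of input */
    return rx_rate * t;               /* capacity = r * t                    */
}

int main(void)
{
    /* Protocol-layer example from the text: 8 G/s received, 4 G/s processed
     * gives t = 2 s and a capacity of 16 G units; a real design might also
     * bound t by a burst window (an assumption, not stated in the patent). */
    printf("capacity: %.1f G\n", node_capacity(8.0, 4.0));
    return 0;
}
```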
Step 302, dividing the memory node into memory slices corresponding to different service types based on service use frequency of the service type, wherein the service use frequency and the slice capacity of the memory slices are in positive correlation.
After the memory nodes are obtained, the memory manager divides them into memory slices. In one possible implementation, the terminal may count the service usage frequencies of the different service types, so that each memory node is divided according to the service usage frequency: the higher the usage frequency of a service, the larger the slice capacity of the memory slice allocated to it.
Optionally, the services may be sorted by service usage frequency, and the share of the corresponding memory slice determined according to the sorted order of service types. The memory manager may preset the memory slice shares corresponding to the service types at different ranks, and divide the memory node according to these shares.
Illustratively, taking 5 service types as an example, the memory slice shares corresponding to the service types at different ranks may be as shown in table 1:
TABLE 1
Rank (by service usage frequency)    Memory slice share
1                                    50%
2-3                                  20% each
4-5                                  5% each
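A possible realization of this ranking-based division, with the table-1 shares hard-coded; the struct and the assumption that exactly five ranks exist are illustrative.

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    int      type;        /* service type identifier      */
    uint32_t usage_freq;  /* measured service usage count */
    uint32_t share_pct;   /* assigned memory slice share  */
} svc_t;

/* Sort by usage frequency, highest first. */
static int by_freq_desc(const void *a, const void *b)
{
    const svc_t *x = a, *y = b;
    if (y->usage_freq > x->usage_freq) return 1;
    if (y->usage_freq < x->usage_freq) return -1;
    return 0;
}

/* Assign table-1 style shares by rank: 50%, 20%, 20%, 5%, 5%. */
static void assign_shares(svc_t *svc, size_t n)
{
    static const uint32_t share_by_rank[5] = { 50, 20, 20, 5, 5 };
    qsort(svc, n, sizeof *svc, by_freq_desc);
    for (size_t i = 0; i < n && i < 5; i++)
        svc[i].share_pct = share_by_rank[i];
}
```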
After the memory slices are obtained by division, the memory manager sets a memory description table and a memory management table for each memory slice, which facilitates the management of the memory slices.
The memory description table may be as shown in table 2:
TABLE 2
Description_0
Description_1
...
Description_n
...
Description_i represents the memory description information of the i-th service unit in the memory slice, including its start address, size, etc. A service unit is the unit formed by the minimum memory units that can be processed at a time during service processing. Service units of different service types differ in size, i.e. one service unit contains a different number of minimum memory units for different service types. For example, service type 1 uses 3 minimum memory units to form a service unit, while service type 2 uses 2 minimum memory units to form a service unit.
The memory management table is shown in table 3:
TABLE 3
in_data_base_address    start address of the memory slice
in_total_depth          slice capacity of the memory slice
in_filled_depth         used slice capacity (number of stored service units)
in_read_pointer         pointer position of the data to be processed
in_write_pointer        pointer position of the stored data
in_full                 flag indicating that the memory slice is full
in_empty                flag indicating that the memory slice is empty (no stored data)
WATERMARK_threshold     memory waterline: the amount of used memory that triggers a processing interrupt
enable                  flag indicating that the memory slice is enabled
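In C, the two tables could be mirrored by structures such as the following; field names follow tables 2 and 3, while the field widths and exact types are assumptions.

```c
#include <stdint.h>

typedef struct {            /* table 2: one entry per service unit     */
    uint32_t start_address; /* Description_i: start address            */
    uint32_t size;          /* Description_i: size of the service unit */
} mem_description_t;

typedef struct {                      /* table 3: one per memory slice */
    uint32_t in_data_base_address;    /* start address of the slice    */
    uint32_t in_total_depth;          /* slice capacity                */
    uint32_t in_filled_depth;         /* used capacity (service units) */
    uint32_t in_read_pointer;         /* next data to be processed     */
    uint32_t in_write_pointer;        /* next free write position      */
    uint8_t  in_full;                 /* memory-full flag              */
    uint8_t  in_empty;                /* memory-empty flag             */
    uint32_t watermark_threshold;     /* waterline that triggers an IRQ */
    uint8_t  enable;                  /* slice-enabled flag            */
} mem_management_t;
```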
Step 303, reading the interrupt frequency of the memory slice from an interrupt frequency statistics table, where the interrupt frequency statistics table stores the interrupt frequency of the memory slice corresponding to each service type, and the interrupt frequency is updated when an interrupt is triggered.
In one possible implementation manner, the memory manager is further provided with an interrupt frequency statistics table corresponding to the memory node, in addition to the memory description table and the memory management table. As shown in table 4:
TABLE 4
InterruptFrequency_1
...
InterruptFrequency_n
ThresholdH
ThresholdL
Wherein InterruptFrequency_i represents the interrupt frequency of the interrupts triggered by memory slice i corresponding to service type i.
During service processing, the memory manager counts the frequency with which each memory slice triggers interrupts and updates the interrupt frequencies in the statistics table in real time.
In one possible implementation, the memory manager may obtain the interrupt frequency in real time, so as to adjust the partition capacity of the memory partition according to the change condition of the interrupt frequency.
In step 304, if the interrupt frequency of the memory slice is greater than the first frequency threshold, the memory slice is determined to be the first memory slice.
In one possible implementation, a first frequency threshold is stored in the memory manager. When the interrupt frequency of a memory slice is greater than the first frequency threshold, the memory usage of the slice has exceeded the memory waterline many times, so the slice capacity of the corresponding memory slice needs to be increased. The slice is therefore determined to be the first memory slice, i.e. the memory slice whose capacity is to be increased.
Alternatively, the first frequency threshold may be stored in an interrupt frequency statistics table, as shown in table 4, with ThresholdH stored therein, indicating an interrupt frequency high threshold, i.e., the first frequency threshold.
In step 305, if the interrupt frequency of the memory slice is less than the second frequency threshold, the memory slice is determined to be the second memory slice, and the first frequency threshold is greater than the second frequency threshold.
After determining the first memory slice, the memory manager also needs to determine a second memory slice whose capacity can be reduced, so as to avoid affecting the corresponding service processing.
The memory manager stores a preset second frequency threshold, which is a low interrupt-frequency threshold and is lower than the first frequency threshold. When the interrupt frequency of a memory slice is smaller than the second frequency threshold, the memory usage of the slice has exceeded the memory waterline only a few times and, accordingly, its memory usage is relatively small, so its slice capacity can be reduced. The slice can therefore be determined to be the second memory slice, i.e. a memory slice whose capacity can be reduced.
Alternatively, the second frequency threshold may be stored in an interrupt frequency statistics table, as shown in table 4, with ThresholdL stored therein, indicating an interrupt frequency low threshold, i.e., the second frequency threshold.
Step 306, increasing the slice capacity of the first memory slice and decreasing the slice capacity of the second memory slice.
In one possible implementation, when both a first memory slice and a second memory slice exist in the memory node, the memory manager adjusts the slice capacities, ensuring that increasing the capacity of one memory slice does not affect the processing of other services.
Optionally, the memory manager may increase the slice capacity of the first memory slice and decrease the slice capacity of the second memory slice, the amount added to the first memory slice being the same as the amount removed from the second memory slice.
Illustratively, as shown in fig. 4, when the interrupt frequency of service type 2 is lower than the second frequency threshold and the interrupt frequency of service type 3 is higher than the first frequency threshold, the slice capacity of service type 2 is reduced from the first capacity 401 to the second capacity 402, and the slice capacity of service type 3 is increased from the third capacity 403 to the fourth capacity 404, without additionally increasing the memory capacity of the memory node.
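A minimal sketch of steps 303-306 as a whole: read the per-slice interrupt frequencies, classify slices against ThresholdH and ThresholdL, and transfer a fixed number of units so the node capacity stays constant. Picking only one slice on each side and the parameter names are simplifying assumptions.

```c
#include <stdint.h>

/* Pick the slice whose interrupt frequency exceeds ThresholdH (the first
 * memory slice), the slice whose frequency is below ThresholdL (the second
 * memory slice), and move `delta` minimum memory units from the latter to
 * the former, keeping the node capacity constant. */
static void rebalance(uint32_t *capacity, const uint32_t *irq_freq, int n,
                      uint32_t thr_high, uint32_t thr_low, uint32_t delta)
{
    int grow = -1, shrink = -1;
    for (int i = 0; i < n; i++) {
        if (grow   < 0 && irq_freq[i] > thr_high) grow   = i;
        if (shrink < 0 && irq_freq[i] < thr_low)  shrink = i;
    }
    if (grow < 0 || shrink < 0 || grow == shrink)
        return;                          /* nothing to adjust safely       */
    if (capacity[shrink] < delta)
        return;                          /* second slice cannot give delta */
    capacity[grow]   += delta;           /* first memory slice grows       */
    capacity[shrink] -= delta;           /* second memory slice shrinks    */
}
```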
The specific adjustment process may refer to the following embodiments, which are not described in detail.
In this embodiment, during node division, the required memory capacity is determined from each node's data receiving rate and data processing rate, so that the nodes are obtained by dividing according to the memory capacity each of them requires. Compared with the related-art approach of accumulating the estimated maximum usage of the different service types, this plans the memory capacity according to the actual data processing capability of each node and improves memory utilization.
During the preliminary division, the slices are divided according to service usage frequency; allocating larger memory slices to commonly used services reduces subsequent frequent adjustments of slice capacity. In this embodiment, after a first memory slice and a second memory slice are detected, the slice capacities are adjusted: by shrinking the memory slice of an inactive service, the slice capacity of the memory slice of an active service is increased, so the memory capacity of the memory node stays unchanged and only the shares of the memory slices of the different service types are adjusted, reducing the impact on service processing.
When adjusting the slice capacities, no other memory slice may lie between the first memory slice and the second memory slice, in which case their capacities can be adjusted directly. In another possible case, other memory slices do lie between them, and those slices need to be moved to realize the capacity adjustment. Exemplary embodiments are described below.
As shown in fig. 5, the step 306 may include the following steps:
In step 306a, when no third memory slice exists between the first memory slice and the second memory slice, the adjacent start and end addresses between the first memory slice and the second memory slice are adjusted.
After the address adjustment, the distance between the first start address and the first end address of the first memory slice is increased, and the distance between the second start address and the second end address of the second memory slice is reduced.
In one possible case, after determining the first memory slice and the second memory slice, the memory manager may first determine whether a third memory slice exists between the first memory slice and the second memory slice, and if the third memory slice does not exist, only the adjacent addresses need to be adjusted.
Optionally, when the start address of the first memory slice (the first start address) is adjacent to the end address of the second memory slice (the second end address), the first start address of the first memory slice and the second end address of the second memory slice can be adjusted, while the end address of the first memory slice (the first end address) and the start address of the second memory slice (the second start address) are kept unchanged. Schematically, as shown in fig. 6, the second start address and second end address of the second memory slice are 1-20, and the first start address and first end address of the first memory slice are 21-50. During the capacity adjustment of the first memory slice, the second end address and the first start address can be reduced simultaneously: the adjusted second start and end addresses of the second memory slice are 1-15, and the first start and end addresses of the first memory slice are 16-50, so that the capacity of the first memory slice is increased and the capacity of the second memory slice is reduced.
Optionally, when the first end address of the first memory slice is adjacent to the second start address of the second memory slice, the first end address of the first memory slice and the second start address of the second memory slice may be adjusted: the first end address and the second start address can be increased simultaneously, while the first start address and the second end address are kept unchanged, so that the capacity of the first memory slice is increased and the capacity of the second memory slice is reduced.
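For this adjacent case, the boundary move can be expressed as the following sketch, which reproduces the fig.-6 numbers in its comments; the range representation is an assumption.

```c
#include <stdint.h>

typedef struct { uint32_t start, end; } slice_range_t;  /* inclusive range */

/* Step 306a: when the two slices are adjacent, the shared boundary is moved
 * toward the second (shrinking) slice; the outer ends of both slices stay
 * fixed. Addresses are in minimum memory units. */
static void move_shared_boundary(slice_range_t *first, slice_range_t *second,
                                 uint32_t delta)
{
    if (second->end + 1 == first->start) {        /* second sits below first */
        second->end  -= delta;                    /* e.g. 20 -> 15           */
        first->start -= delta;                    /* e.g. 21 -> 16           */
    } else if (first->end + 1 == second->start) { /* first sits below second */
        first->end    += delta;
        second->start += delta;
    }
}
```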
In step 306b, when a third memory slice exists between the first memory slice and the second memory slice, a minimum move path is determined, the minimum move path being the path that crosses the smallest number of third memory slices.
In another possible case, another third memory slice lies between the first memory slice and the second memory slice. In this case, during the capacity adjustment, the third memory slice needs to be moved, that is, the start address of the third memory slice (the third start address) and its end address (the third end address) are adjusted.
In one possible implementation, when third memory slices exist, there may be several different move paths, and the memory manager determines the minimum move path among them, so that the slice capacity is adjusted along the minimum move path and adjustment efficiency is improved. Determining the minimum move path may comprise the following steps:
Step one, determining the number of the third memory slices under different moving paths.
In one possible implementation, the memory manager may determine the number of third memory slices to be traversed under various different move paths.
Illustratively, suppose the memory block corresponding to the memory node spans addresses 1-100 and contains six memory slices located at 1-10, 11-40, 41-60, 61-80, 81-90 and 91-100.
Suppose the first memory slice (the one to be increased) is the slice at 11-40 and the second memory slice (the one to be reduced) is the slice at 61-80. Under the first move path, the adjustment order is 11-40, 41-60, 61-80, and the number of third memory slices crossed is 1. Under the second move path, the adjustment order is 11-40, 1-10, 91-100, 81-90, 61-80, and the number of third memory slices crossed is 3.
Step two, determining the move path that crosses the smallest number of third memory slices as the minimum move path.
In one possible implementation, the memory manager determines the move path that crosses the fewest third memory slices as the minimum move path, so that the memory slices are adjusted with the smallest amount of movement, improving adjustment efficiency.
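One way to realize steps one and two is to treat the slices of the node as laid out in address order and count the intermediate slices in both traversal directions, as in the example above; treating the layout as a ring (so that the path may wrap past the ends of the node) is an assumption based on that example.

```c
/* Count how many intermediate (third) slices lie between slice `from` and
 * slice `to` when walking in the given direction over n_slices slices. */
static int count_between(int from, int to, int n_slices, int direction /* +1 or -1 */)
{
    int count = 0;
    for (int i = (from + direction + n_slices) % n_slices; i != to;
         i = (i + direction + n_slices) % n_slices)
        count++;
    return count;
}

/* Returns +1 or -1: the direction whose path crosses fewer third slices. */
static int min_move_direction(int first, int second, int n_slices)
{
    int fwd = count_between(first, second, n_slices, +1);
    int bwd = count_between(first, second, n_slices, -1);
    return (fwd <= bwd) ? +1 : -1;
}
```

For the six-slice example above (first slice at index 1, second at index 3), the forward direction crosses 1 third slice and the backward direction crosses 3, so the forward path is chosen.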
Step 306c, based on the minimum move path, increasing the slice capacity of the first memory slice and decreasing the slice capacity of the second memory slice.
In one possible implementation, after the minimum move path is determined, the capacity of the first memory slice may be increased and the capacity of the second memory slice decreased along the minimum move path. This can comprise the following steps:
Step one, according to the adjustment sequence indicated by the minimum move path, the first start address or the first end address of the first memory slice is adjusted; after the address adjustment, the distance between the first start address and the first end address is increased.
In one possible implementation, the memory manager may sequentially adjust the first memory slice, the third memory slice, and the second memory slice according to the minimum shift path. The first memory area may be adjusted as a starting point, or the second memory area may be adjusted as a starting point. This embodiment is not limited thereto.
During the adjustment, going from the first memory slice to the second memory slice, one of the first start address and the first end address of the first memory slice is kept unchanged, and one of the second start address and the second end address of the second memory slice is kept unchanged.
When the adjustment starts from the first memory slice and the first start address of the first memory slice is kept unchanged, the second end address of the second memory slice is kept unchanged after the slices along the path are adjusted in sequence; when the first end address of the first memory slice is kept unchanged, the second start address of the second memory slice is kept unchanged.
When the adjustment starts from the second memory slice and the second start address is kept unchanged, the first end address of the first memory slice is kept unchanged; when the second end address of the second memory slice is kept unchanged, the first start address of the first memory slice is kept unchanged.
That is, only one of the first start address and the first end address needs to be adjusted when adjusting the first memory slice.
And step two, adjusting a third starting address and a third ending address of the third memory area, wherein the distance between the third starting address and the third ending address after the address adjustment is kept unchanged.
In the process of adjusting the third memory slice, the slice capacity of the third memory slice is kept unchanged, so that the third starting address and the third ending address of the third memory slice are adjusted, and the interval between the adjusted third starting address and third ending address is kept unchanged.
In one possible implementation, either the first memory slice or the second memory slice may be the starting point. When the first memory slice is the starting point, after the first memory slice is adjusted, the third start address and the third end address of each third memory slice are adjusted in sequence, and then the second memory slice is adjusted. When the second memory slice is the starting point, after the second memory slice is adjusted, the third start address and the third end address of each third memory slice are adjusted in sequence.
And thirdly, adjusting a second starting address or a second ending address of the second memory area, wherein the distance between the second starting address and the second ending address after the address adjustment is reduced.
In one possible implementation, only one of the second start address and the second end address needs to be adjusted in the second memory segment adjustment process.
Illustratively, as shown in fig. 7, when the addresses of the first memory slice are 11-40, the addresses of the third memory slice are 41-60 and the addresses of the second memory slice are 61-80, taking the first memory slice as the adjustment starting point, the addresses of the adjusted memory slices under the minimum move path are: first memory slice 11-45, third memory slice 46-65, second memory slice 66-80.
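The sequential adjustment along the path, for the increasing-address case of fig. 7, can be sketched as follows; the mirrored (decreasing-address) case and the choice of the first memory slice as the starting point follow the same pattern.

```c
#include <stdint.h>

typedef struct { uint32_t start, end; } range_t;   /* inclusive addresses */

/* Step 306c for the fig.-7 case: adjust the slices along the minimum move
 * path, keeping the outer ends of the first and second slice fixed and
 * sliding every third slice by the same amount. path[0] is the first
 * (growing) slice, path[path_len-1] the second (shrinking) slice. */
static void shift_along_path(range_t *path[], int path_len, uint32_t delta)
{
    path[0]->end += delta;                          /* 40 -> 45            */
    for (int i = 1; i < path_len - 1; i++) {        /* third slices slide  */
        path[i]->start += delta;                    /* 41 -> 46            */
        path[i]->end   += delta;                    /* 60 -> 65            */
    }
    path[path_len - 1]->start += delta;             /* 61 -> 66            */
}
```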
In one possible case, the second memory slices obtained from the memory manager's statistics may include at least two memory slices, that is, at least two memory slices whose capacity can be reduced. In this case, in order to reduce the amount of adjustment, one second memory slice may be selected from the at least two second memory slices according to the minimum move path between the slices.
Optionally, before increasing the capacity of the first memory slice and reducing the capacity of the second memory slice, when at least two second memory slices exist, the minimum move path between each of them and the first memory slice is determined.
When at least two second memory slices are obtained, the memory manager determines the minimum move path between each second memory slice and the first memory slice, and then reduces the second memory slice whose minimum move path is shortest.
Illustratively, when two third memory slices lie on the minimum move path between second memory slice A and the first memory slice, and one third memory slice lies on the minimum move path between second memory slice B and the first memory slice, second memory slice B may be determined as the memory slice to be reduced, and the memory manager reduces its slice capacity.
In this embodiment, when adjusting the slice capacity of a memory slice, if a third memory slice exists between the first memory slice and the second memory slice, the memory manager can adjust along the minimum move path, improving adjustment efficiency. And when several second memory slices exist, one of them can be selected according to the minimum move path between each second memory slice and the first memory slice, further improving adjustment efficiency and reducing the impact on service processing.
The slice capacity can also be adjusted according to the number of memory units by which the service type needs to grow. Step 306 may then comprise the following steps:
Step one, determining the memory unit amount that needs to be added for the service type corresponding to the first memory slice, the memory unit amount being a number of minimum memory units in the memory node.
In one possible implementation, the memory manager first determines the memory unit amount that needs to be added for the service type corresponding to the first memory slice, i.e. the number of minimum memory units to be added.
Optionally, the memory unit amount to be added may be determined according to a fixed ratio, for example a 5% increase relative to the current slice capacity of the first memory slice, with 5% of that slice capacity taken as the added memory unit amount.
Alternatively, the added memory unit amount can be adjusted dynamically according to the number of times the interrupt frequency has exceeded the first frequency threshold: the more times the interrupt frequency has exceeded the first frequency threshold, the more memory units are added, up to an upper limit. For example, 5% may be added the first time the interrupt frequency exceeds the first frequency threshold and 8% the second time, up to an upper limit of a 20% increase.
And step two, increasing the capacity of the first memory slice based on the memory unit quantity.
The memory manager increases the capacity of the first memory segment according to the memory unit amount, and the adjusted minimum memory unit amount included in the first memory segment is the sum of the minimum memory unit amount included in the first memory segment before adjustment and the memory unit amount.
Step three, reducing the slice capacity of the second memory slice based on the memory unit amount.
The memory manager reduces the capacity of the second memory segment according to the memory unit amount, and the adjusted minimum memory unit amount contained in the second memory segment is the difference between the minimum memory unit amount contained in the second memory segment before adjustment and the memory unit amount.
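The two options for choosing the added amount could be combined as in the following sketch; the 20% cap and the 5% and 8% values are the example figures above, while the 3-percentage-point escalation step between them is an assumption.

```c
#include <stdint.h>

/* Number of minimum memory units to add to the first memory slice, given
 * its current capacity and how many consecutive times its interrupt
 * frequency has exceeded the first frequency threshold. */
static uint32_t units_to_add(uint32_t current_capacity, unsigned times_over)
{
    unsigned pct;
    if (times_over == 0)
        return 0;
    pct = 5 + (times_over - 1) * 3;     /* 5%, 8%, 11%, ... (assumed step) */
    if (pct > 20)
        pct = 20;                       /* upper limit on the increase     */
    return current_capacity * pct / 100;
}
```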
In the above manner, the slice capacity is adjusted directly according to the number of memory units by which the service type needs to grow. However, the service units of different service types have different sizes, that is, the number of minimum memory units that can be processed at a time differs. Suppose the service unit of the first service, corresponding to the first memory slice, is 3 minimum memory units, and the service unit of the second service, corresponding to the second memory slice, is 2 minimum memory units. If the first service needs 3 more service units, 9 minimum memory units must be added; if the second memory slice is then reduced by 9 minimum memory units, a single minimum memory unit is left in the second memory slice that cannot form a service unit of the second service, so memory fragments appear in the second memory slice and memory is wasted. Thus, in another possible implementation, a memory adjustment granularity may be set and the adjustment performed based on it. The method can comprise the following steps:
Step one, obtaining the service unit size of each memory slice, the service unit size being the number of minimum memory units processed at a time during service processing.
In one possible implementation, the memory manager may obtain a service unit size of each memory fragment in the memory node corresponding to a service type, thereby determining the memory adjustment granularity according to the service unit size. The memory adjustment granularity refers to an adjustment reference unit in the process of adjusting the capacity of the memory, that is, the reference number of the adjusted minimum memory units, and the capacity of the finally adjusted memory is an integer multiple of the memory adjustment granularity.
Or in each adjustment process, the memory manager may obtain the service unit sizes (including the service unit sizes corresponding to the first memory slice, the second memory slice and the third memory slice) of the service types corresponding to each memory slice in the move path, so as to determine the memory adjustment granularity according to the service unit sizes of the adjusted memory slices in the move path.
And step two, determining the least common multiple of the sizes of the business units as memory adjustment granularity, wherein the memory adjustment granularity refers to the reference number of the minimum memory units adjusted in the process of adjusting the capacity of the slice, and the adjusted capacity of the slice is an integer multiple of the memory adjustment granularity.
In one possible implementation, the memory manager may determine the least common multiple of the sizes of the individual service units as the memory adjustment granularity. For example, when the service unit size contains 2 and 3, 6 may be determined as the memory adjustment granularity. In the process of adjusting the capacity of the slice area, 6 minimum memory units are used as reference units for adjustment.
And thirdly, increasing the capacity of the first memory slice and reducing the capacity of the second memory slice based on the memory adjusting granularity.
After determining the memory adjustment granularity, the memory manager may increase the slice capacity of the first memory slice and decrease the slice capacity of the second memory slice based on the memory adjustment granularity. The memory unit amount to be added can be determined first; when it is an integer multiple of the memory adjustment granularity, the slice capacity is adjusted directly by that amount. For example, when the memory adjustment granularity is 6 and the memory unit amount to be added is 12, the slice capacity is adjusted by 12. When the memory unit amount to be added is not an integer multiple of the memory adjustment granularity, the smallest integer multiple of the granularity that is larger than the memory unit amount is used as the adjustment amount. For example, when the memory unit amount to be added is 15, the adjustment amount is determined to be 18, and the memory manager increases the first memory slice by 18 minimum memory units and decreases the second memory slice by 18 minimum memory units.
Because the memory adjustment granularity is the least common multiple of the sizes of the business units, the generation of memory fragments can be effectively avoided under the condition that the sizes of the business units corresponding to different businesses are different.
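The granularity rule itself is straightforward to sketch: compute the least common multiple of the service unit sizes and round any requested adjustment up to a multiple of it, reproducing the examples above (sizes 2 and 3 give granularity 6, and a request for 15 units becomes 18).

```c
#include <stdint.h>

static uint32_t gcd_u32(uint32_t a, uint32_t b)
{
    while (b) { uint32_t t = a % b; a = b; b = t; }
    return a;
}

/* Memory adjustment granularity: least common multiple of the service unit
 * sizes of the slices involved in the adjustment. */
static uint32_t adjustment_granularity(const uint32_t *unit_sizes, int n)
{
    uint32_t g = 1;
    for (int i = 0; i < n; i++)
        g = g / gcd_u32(g, unit_sizes[i]) * unit_sizes[i];   /* lcm */
    return g;
}

/* Round a requested number of minimum memory units up to the granularity. */
static uint32_t round_up_to_granularity(uint32_t units, uint32_t granularity)
{
    return (units + granularity - 1) / granularity * granularity;
}
```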
In this embodiment, the memory adjustment granularity is determined from the service unit sizes and the slice capacity of the memory slice is adjusted based on it, so that memory fragments are not generated during adjustment and memory waste is avoided.
Referring to fig. 8, a block diagram of a memory management device according to an embodiment of the application is shown. As shown in fig. 8, the apparatus may include:
the memory dividing module 801 is configured to determine memory slices according to the service types corresponding to different service data;
The memory adjustment module 802 is configured to adjust the slice capacity of a memory slice based on the interrupt frequency of the memory slice during service processing, where the interrupt frequency is the frequency at which the memory usage of the memory slice is greater than the memory waterline, and the memory waterline is the threshold at which the memory slice triggers an interrupt.
Optionally, the memory adjustment module 802 is further configured to:
determining the memory slice area as a first memory slice area when the interrupt frequency of the memory slice area is greater than a first frequency threshold;
determining the memory slice area as a second memory slice area when the interrupt frequency of the memory slice area is less than a second frequency threshold, where the first frequency threshold is greater than the second frequency threshold;
and increasing the slice capacity of the first memory slice area and decreasing the slice capacity of the second memory slice area.
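A minimal C sketch of this classification by interrupt frequency; the struct field, thresholds, and function names are illustrative assumptions rather than the patent's implementation.

```c
/* Illustrative per-slice bookkeeping; a real descriptor would also carry
 * the slice's start and end addresses. */
struct slice_stats {
    unsigned interrupt_freq;   /* waterline interrupts in the current window */
};

enum slice_role { ROLE_NONE, ROLE_FIRST, ROLE_SECOND };

/* The first frequency threshold is greater than the second one: a slice
 * that overflows its waterline often becomes a first (grow) slice, and one
 * that rarely overflows becomes a second (shrink) slice. */
enum slice_role classify_slice(const struct slice_stats *s,
                               unsigned first_threshold,
                               unsigned second_threshold) {
    if (s->interrupt_freq > first_threshold)
        return ROLE_FIRST;
    if (s->interrupt_freq < second_threshold)
        return ROLE_SECOND;
    return ROLE_NONE;   /* leave the slice capacity unchanged */
}
```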
Optionally, the memory adjustment module 802 is further configured to:
when no third memory slice area exists between the first memory slice area and the second memory slice area, adjusting the adjacent start address and end address between the first memory slice area and the second memory slice area, where after the address adjustment the spacing between the first start address and the first end address of the first memory slice area increases, and the spacing between the second start address and the second end address of the second memory slice area decreases.
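A minimal C sketch of the adjacent-slice case, under the assumption that slice boundaries are tracked as start/end offsets expressed in minimum memory units; the function and parameter names are hypothetical.

```c
#include <stddef.h>

/* When the slice to grow and the slice to shrink are adjacent, only the
 * boundary they share moves; "delta" is the number of minimum memory
 * units transferred from the shrinking slice to the growing one. */
void adjust_adjacent(size_t *grow_start, size_t *grow_end,
                     size_t *shrink_start, size_t *shrink_end,
                     size_t delta) {
    if (*grow_end == *shrink_start) {
        /* growing slice lies just below the shrinking one */
        *grow_end     += delta;
        *shrink_start += delta;
    } else if (*shrink_end == *grow_start) {
        /* growing slice lies just above the shrinking one */
        *grow_start  -= delta;
        *shrink_end  -= delta;
    }
    /* Either way, the spacing between the first slice's start and end
     * increases by delta and the second slice's spacing decreases by
     * delta; no other slice has to move. */
}
```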
Optionally, the memory adjustment module 802 is further configured to:
determining a minimum moving path when a third memory slice area exists between the first memory slice area and the second memory slice area, where the number of third memory slice areas is smallest under the minimum moving path;
and increasing the slice capacity of the first memory slice area and decreasing the slice capacity of the second memory slice area based on the minimum moving path.
Optionally, the memory adjustment module 802 is further configured to:
adjusting a first start address or a first end address of the first memory slice area according to the adjustment order indicated by the minimum moving path, where the spacing between the first start address and the first end address increases after the address adjustment;
adjusting a third start address and a third end address of the third memory slice area, where the spacing between the third start address and the third end address remains unchanged after the address adjustment;
and adjusting a second start address or a second end address of the second memory slice area, where the spacing between the second start address and the second end address decreases after the address adjustment.
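A minimal C sketch of this adjustment order, assuming addresses increase from the first memory slice area toward the second along the moving path; the types and names are illustrative only.

```c
#include <stddef.h>

/* One slice expressed as [start, end) in minimum memory units. */
struct slice_range { size_t start; size_t end; };

/* Adjustment order along a moving path that runs from the slice to grow,
 * across "count" intermediate (third) slices, to the slice to shrink. */
void adjust_along_path(struct slice_range *grow,
                       struct slice_range *middle, size_t count,
                       struct slice_range *shrink, size_t delta) {
    /* 1. Grow the first slice: its end address moves up by delta. */
    grow->end += delta;

    /* 2. Shift every intermediate slice up by delta; start and end move
     *    together, so each third slice keeps its capacity and only its
     *    data is relocated. */
    for (size_t i = 0; i < count; i++) {
        middle[i].start += delta;
        middle[i].end   += delta;
    }

    /* 3. Shrink the second slice: its start address moves up by delta,
     *    so the spacing between its start and end decreases. */
    shrink->start += delta;
}
```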
Optionally, the memory adjustment module 802 is further configured to:
determining the number of third memory slice areas under different moving paths;
and determining the moving path corresponding to the smallest number of slice areas as the minimum moving path.
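A minimal C sketch of selecting the minimum moving path, assuming the number of third memory slice areas on each candidate path has already been counted; names are illustrative.

```c
#include <stddef.h>

/* Pick the moving path that crosses the fewest third slices.
 * slices_per_path[i] is the number of third slices on candidate path i;
 * the index of the cheapest candidate is returned (count must be > 0). */
size_t pick_minimum_moving_path(const size_t *slices_per_path, size_t count) {
    size_t best = 0;
    for (size_t i = 1; i < count; i++) {
        if (slices_per_path[i] < slices_per_path[best]) {
            best = i;
        }
    }
    return best;
}
```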
Optionally, the memory adjustment module 802 is further configured to:
determining, when at least two second memory slice areas are included, the minimum moving path between each of the at least two second memory slice areas and the first memory slice area;
and decreasing the second memory slice area corresponding to the shortest minimum moving path.
Optionally, the memory adjustment module 802 is further configured to:
determining the amount of memory units to be added for the service type corresponding to the first memory slice area, where the amount of memory units is a number of minimum memory units in the memory node;
increasing the slice capacity of the first memory slice area based on the amount of memory units;
and decreasing the slice capacity of the first memory slice area based on the amount of memory units.
Optionally, the memory adjustment module 802 is further configured to:
acquiring the service unit size of each memory slice area, where the service unit size refers to the number of minimum memory units processed at one time during service processing;
determining the least common multiple of the service unit sizes as the memory adjustment granularity, where the memory adjustment granularity refers to the reference number of minimum memory units adjusted during slice capacity adjustment, and the adjusted slice capacity is an integer multiple of the memory adjustment granularity;
and increasing the slice capacity of the first memory slice area and decreasing the slice capacity of the second memory slice area based on the memory adjustment granularity.
Optionally, the memory partitioning module 801 is further configured to:
performing node division on the memory to obtain memory nodes, where memory movement exists between different memory nodes;
and dividing the memory node into slice areas based on the service usage frequency corresponding to each service type, to obtain memory slice areas corresponding to different service types, where the service usage frequency is positively correlated with the slice capacity of the memory slice area.
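A minimal C sketch of frequency-proportional slice division, assuming integer usage-frequency weights; the rounding rule (remainder given to the last slice) is an assumption, since the text does not fix one.

```c
#include <stddef.h>

/* Split a memory node of "node_units" minimum memory units into one slice
 * per service type, in proportion to each type's usage frequency, so that
 * a more frequently used service type receives a larger slice. */
void divide_node_by_frequency(size_t node_units,
                              const unsigned *usage_freq, size_t type_count,
                              size_t *slice_units /* out, type_count entries */) {
    unsigned long long total = 0;
    for (size_t i = 0; i < type_count; i++)
        total += usage_freq[i];
    if (total == 0 || type_count == 0)
        return;   /* nothing sensible to divide */

    size_t assigned = 0;
    for (size_t i = 0; i < type_count; i++) {
        if (i + 1 == type_count) {
            slice_units[i] = node_units - assigned;   /* remainder */
        } else {
            slice_units[i] = (size_t)((unsigned long long)node_units *
                                      usage_freq[i] / total);
            assigned += slice_units[i];
        }
    }
}
```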
Optionally, the memory partitioning module 801 is further configured to:
determining, based on a first data rate and a second data rate, a processing time for each data node to process received data, where the data node is a node that processes data of a memory node, the first data rate is the rate at which the data node receives data, and the second data rate is the rate at which the data node processes data;
determining the memory capacity required by each data node based on the processing time and the first data rate;
and performing node division on the memory based on the memory capacity required by each data node, to obtain the memory nodes.
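One possible reading of this step, expressed as a hedged C sketch: the processing time of a batch is taken as the batch size divided by the second (processing) data rate, and the required capacity as the first (receive) data rate multiplied by that time. This formula is an assumption for illustration; the text only states which quantities the computation depends on.

```c
#include <stddef.h>

/* Assumed interpretation: the node keeps receiving at first_rate while it
 * works through a batch at second_rate, so the batch's processing time is
 * batch_bytes / second_rate, and the memory the node needs is what can
 * arrive during that time, first_rate * processing_time. */
size_t node_memory_capacity(size_t batch_bytes,
                            double first_rate  /* bytes per second, receive */,
                            double second_rate /* bytes per second, process */) {
    if (second_rate <= 0.0)
        return 0;   /* degenerate input, no estimate */
    double processing_time = (double)batch_bytes / second_rate;
    return (size_t)(first_rate * processing_time);
}
```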
Optionally, the apparatus further includes:
the frequency acquisition module is configured to read the interrupt frequency of the memory slice area from an interrupt frequency statistics table, where the interrupt frequency statistics table stores the interrupt frequency of the memory slice area corresponding to each service type, and the interrupt frequency is updated when an interrupt is triggered.
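A minimal C sketch of such an interrupt frequency statistics table; the table size, names, and update point are illustrative assumptions.

```c
#include <stddef.h>

#define SERVICE_TYPE_COUNT 8   /* illustrative table size */

/* One counter per service type's memory slice area, bumped each time that
 * slice's usage crosses its waterline and an interrupt is raised. The
 * memory adjustment logic only reads the table. */
static unsigned interrupt_freq_table[SERVICE_TYPE_COUNT];

/* Called from the waterline interrupt for the given service type. */
void on_waterline_interrupt(size_t service_type) {
    if (service_type < SERVICE_TYPE_COUNT)
        interrupt_freq_table[service_type]++;
}

/* Called by the memory manager before adjusting slice capacities. */
unsigned read_interrupt_frequency(size_t service_type) {
    return (service_type < SERVICE_TYPE_COUNT)
               ? interrupt_freq_table[service_type]
               : 0;
}
```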
In summary, in the embodiment of the present application, the memory manager in the computer device first determines the memory slice areas corresponding to different service data based on the service types of the service data. Then, according to the frequency at which the memory usage exceeds the memory waterline during service processing, the capacity of each memory slice area can be dynamically adjusted: the capacity of a memory slice area with a higher interrupt frequency is increased, and the capacity of a memory slice area with a lower interrupt frequency is decreased. Compared with the related-art approach of planning memory capacity according to each service's estimated maximum memory usage, the dynamic memory management mechanism in the embodiment of the application reasonably adjusts the capacity of the memory slice areas corresponding to different service types, which improves memory utilization and saves memory space. In addition, because the slice capacity is adjusted based on the interrupt frequency, frequent triggering of interrupts can be reduced, which helps reduce power consumption and improve the efficiency of batch processing of service data.
It should be noted that the device provided in the above embodiment is described only with the division of the above functional modules as an example. In practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the device provided in the above embodiment and the method embodiments belong to the same concept; the detailed implementation process is described in the method embodiments and is not repeated herein.
Referring to FIG. 9, a block diagram of a computer device 900 according to an exemplary embodiment of the application is shown. Computer device 900 in the present application may include one or more components including a processor 910 and a memory 920.
Processor 910 may include one or more processing cores. The processor 910 connects various parts of the computer device 900 using various interfaces and lines, and performs various functions of the computer device 900 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 920 and by invoking data stored in the memory 920. Optionally, the processor 910 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 910 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; and the modem is used for handling wireless communication. It will be appreciated that the modem may also not be integrated into the processor 910 and may instead be implemented by a separate baseband chip.
Memory 920 may include memory 921 and memory manager 922. The memory 921 may include a random access memory (Random Access Memory, RAM) and may also include a read-only memory (Read-Only Memory, ROM). Optionally, the memory 920 includes a non-transitory computer-readable storage medium. The memory 921 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 921 may include a program storage area and a data storage area. The program storage area may store instructions for implementing an operating system, which may be an Android system (including systems developed based on Android), an iOS system developed by Apple Inc. (including systems developed based on iOS), or another system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the above method embodiments, and the like. The data storage area may store data created by the computer device 900 during use (such as a phone book, audio and video data, and chat log data), and the like.
Memory 920 may also include the memory manager 922, where the memory manager 922 includes programmable logic circuits and/or program instructions used to manage the memory of the computer device 900.
In addition, those skilled in the art will appreciate that the structure of the computer device 900 shown in the above figures does not limit the computer device 900, and a computer device may include more or fewer components than shown, combine certain components, or use a different arrangement of components. For example, the computer device 900 may further include components such as a radio frequency circuit, a camera component, a sensor, an audio circuit, a wireless fidelity (Wireless Fidelity, WiFi) component, a power supply, and a Bluetooth component, which are not described herein.
The present application also provides a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement the memory management method provided by any of the above-described exemplary embodiments.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions to cause the computer device to perform the memory management method provided in the above-described alternative implementation.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing description of the preferred embodiments of the present application is not intended to limit the application; the protection scope of the present application shall be subject to the appended claims.

Claims (16)

1. A memory management method, characterized in that the method comprises:
determining memory slice areas according to service types corresponding to different service data;
counting an interrupt frequency of each memory slice area, the interrupt frequency being the frequency at which a memory usage of the memory slice area is greater than a memory waterline, the memory waterline being a threshold at which the memory slice area triggers an interrupt, and a processor processing data in the memory slice area when the memory slice area triggers an interrupt; and
adjusting a slice capacity of the memory slice area based on the interrupt frequency of the memory slice area during service processing, so as to reduce the probability of the memory slice area triggering an interrupt, wherein a memory adjustment granularity used in the adjustment is the least common multiple of the service unit sizes of the service types corresponding to the memory slice areas, the service unit size refers to the number of minimum memory units processed at one time during service processing, the memory adjustment granularity refers to the reference number of minimum memory units adjusted during slice capacity adjustment, and the adjusted slice capacity is an integer multiple of the memory adjustment granularity.
2. The method according to claim 1, wherein the adjusting a slice capacity of the memory slice area based on the interrupt frequency of the memory slice area during service processing comprises:
determining the memory slice area as a first memory slice area when the interrupt frequency of the memory slice area is greater than a first frequency threshold;
determining the memory slice area as a second memory slice area when the interrupt frequency of the memory slice area is less than a second frequency threshold, the first frequency threshold being greater than the second frequency threshold; and
increasing the slice capacity of the first memory slice area and decreasing the slice capacity of the second memory slice area.
3. The method according to claim 2, wherein the increasing the slice capacity of the first memory slice area and decreasing the slice capacity of the second memory slice area comprises:
when no third memory slice area exists between the first memory slice area and the second memory slice area, adjusting the adjacent start address and end address between the first memory slice area and the second memory slice area, wherein after the address adjustment the spacing between a first start address and a first end address of the first memory slice area increases, and the spacing between a second start address and a second end address of the second memory slice area decreases.
4. The method according to claim 2, wherein the increasing the slice capacity of the first memory slice area and decreasing the slice capacity of the second memory slice area comprises:
when a third memory slice area exists between the first memory slice area and the second memory slice area, determining a minimum moving path, the number of third memory slice areas being smallest under the minimum moving path; and
increasing the slice capacity of the first memory slice area and decreasing the slice capacity of the second memory slice area based on the minimum moving path.
5. The method according to claim 4, wherein the increasing the slice capacity of the first memory slice area and decreasing the slice capacity of the second memory slice area based on the minimum moving path comprises:
adjusting a first start address or a first end address of the first memory slice area according to the adjustment order indicated by the minimum moving path, the spacing between the first start address and the first end address increasing after the address adjustment;
adjusting a third start address and a third end address of the third memory slice area, the spacing between the third start address and the third end address remaining unchanged after the address adjustment; and
adjusting a second start address or a second end address of the second memory slice area, the spacing between the second start address and the second end address decreasing after the address adjustment.
6. The method according to claim 4, wherein the determining a minimum moving path comprises:
determining the number of third memory slice areas under different moving paths; and
determining the moving path corresponding to the smallest number of slice areas as the minimum moving path.
7. The method according to claim 4, wherein before the increasing the slice capacity of the first memory slice area and decreasing the slice capacity of the second memory slice area, the method further comprises:
when at least two second memory slice areas are included, determining the minimum moving path between each of the at least two second memory slice areas and the first memory slice area;
and the decreasing the slice capacity of the second memory slice area comprises:
decreasing the second memory slice area corresponding to the shortest minimum moving path.
8. The method according to any one of claims 2 to 7, wherein the increasing the slice capacity of the first memory slice area and decreasing the slice capacity of the second memory slice area comprises:
determining the amount of memory units to be added for the service type corresponding to the first memory slice area, the amount of memory units being a number of minimum memory units in a memory node;
increasing the slice capacity of the first memory slice area based on the amount of memory units; and
decreasing the slice capacity of the first memory slice area based on the amount of memory units.
9. The method according to any one of claims 1 to 7, wherein before the determining memory slice areas according to service types corresponding to different service data, the method further comprises:
performing node division on the memory to obtain memory nodes, memory movement existing between different memory nodes;
and the determining memory slice areas according to service types corresponding to different service data comprises:
dividing the memory node into slice areas based on the service usage frequency corresponding to each service type, to obtain memory slice areas corresponding to different service types, wherein the service usage frequency is positively correlated with the slice capacity of the memory slice area.
10. The method according to claim 9, wherein the performing node division on the memory to obtain memory nodes comprises:
determining, based on a first data rate and a second data rate, a processing time for each data node to process received data, the data node being a node that processes data of a memory node, the first data rate being the rate at which the data node receives data, and the second data rate being the rate at which the data node processes data;
determining the memory capacity required by each data node based on the processing time and the first data rate; and
performing node division on the memory based on the memory capacity required by each data node, to obtain the memory nodes.
11. The method according to any one of claims 1 to 7, wherein before the adjusting a slice capacity of the memory slice area based on the interrupt frequency of the memory slice area during service processing, the method further comprises:
reading the interrupt frequency of the memory slice area from an interrupt frequency statistics table, the interrupt frequency statistics table storing the interrupt frequency of the memory slice area corresponding to each service type.
12. A memory management apparatus, characterized in that the apparatus comprises:
a memory partitioning module, configured to determine memory slice areas according to service types corresponding to different service data; and
a memory adjustment module, configured to count an interrupt frequency of each memory slice area, the interrupt frequency being the frequency at which a memory usage of the memory slice area is greater than a memory waterline, the memory waterline being a threshold at which the memory slice area triggers an interrupt, and a processor processing data in the memory slice area when the memory slice area triggers an interrupt; and
adjust a slice capacity of the memory slice area based on the interrupt frequency of the memory slice area during service processing, so as to reduce the probability of the memory slice area triggering an interrupt, wherein a memory adjustment granularity used in the adjustment is the least common multiple of the service unit sizes of the service types corresponding to the memory slice areas, the service unit size refers to the number of minimum memory units processed at one time during service processing, the memory adjustment granularity refers to the reference number of minimum memory units adjusted during slice capacity adjustment, and the adjusted slice capacity is an integer multiple of the memory adjustment granularity.
13. A memory manager, characterized in that the memory manager comprises programmable logic circuits and/or program instructions, and the memory manager, when running, is configured to implement the memory management method according to any one of claims 1 to 11.
14. A computer device, characterized in that the computer device comprises a processor and a memory, the memory comprising a memory and the memory manager according to claim 13.
15. A computer-readable storage medium, characterized in that at least one program code is stored in the computer-readable storage medium, and the program code is loaded and executed by a processor to implement the memory management method according to any one of claims 1 to 11.
16. A computer program product, characterized in that the computer program product comprises computer instructions, the computer instructions are stored in a computer-readable storage medium, and a processor reads the computer instructions from the computer-readable storage medium and executes them to implement the memory management method according to any one of claims 1 to 11.
CN202211139508.0A 2022-09-19 2022-09-19 Memory management method, device, memory manager, equipment and storage medium Active CN115421919B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211139508.0A CN115421919B (en) 2022-09-19 2022-09-19 Memory management method, device, memory manager, equipment and storage medium
PCT/CN2023/098519 WO2024060682A1 (en) 2022-09-19 2023-06-06 Memory management method and apparatus, memory manager, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211139508.0A CN115421919B (en) 2022-09-19 2022-09-19 Memory management method, device, memory manager, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115421919A CN115421919A (en) 2022-12-02
CN115421919B true CN115421919B (en) 2025-09-09

Family

ID=84204628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211139508.0A Active CN115421919B (en) 2022-09-19 2022-09-19 Memory management method, device, memory manager, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN115421919B (en)
WO (1) WO2024060682A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115421919B (en) * 2022-09-19 2025-09-09 Oppo广东移动通信有限公司 Memory management method, device, memory manager, equipment and storage medium
CN116483287B (en) * 2023-06-21 2024-02-06 宝德计算机系统股份有限公司 Data security storage method and system for Internet users

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101937397A (en) * 2009-06-29 2011-01-05 深圳富泰宏精密工业有限公司 Mobile intelligent terminal and dynamic memory management method thereof
CN109144718A (en) * 2018-07-06 2019-01-04 北京比特大陆科技有限公司 A kind of memory allocation method, memory release method and relevant device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6363468B1 (en) * 1999-06-22 2002-03-26 Sun Microsystems, Inc. System and method for allocating memory by partitioning a memory
US7296133B2 (en) * 2004-10-14 2007-11-13 International Business Machines Corporation Method, apparatus, and computer program product for dynamically tuning amount of physical processor capacity allocation in shared processor systems
US7620840B2 (en) * 2006-12-29 2009-11-17 Intel Corporation Transactional flow management interrupt debug architecture
JP5439983B2 (en) * 2009-06-29 2014-03-12 富士通株式会社 Multiprocessor system, interrupt control method, and interrupt control program
CN104778125B (en) * 2015-04-03 2017-09-15 无锡天脉聚源传媒科技有限公司 A kind of EMS memory management process and system
CN111008076B (en) * 2019-12-06 2023-03-14 安徽芯智科技有限公司 Memory management method based on slab algorithm
CN115421919B (en) * 2022-09-19 2025-09-09 Oppo广东移动通信有限公司 Memory management method, device, memory manager, equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101937397A (en) * 2009-06-29 2011-01-05 深圳富泰宏精密工业有限公司 Mobile intelligent terminal and dynamic memory management method thereof
CN109144718A (en) * 2018-07-06 2019-01-04 北京比特大陆科技有限公司 A kind of memory allocation method, memory release method and relevant device

Also Published As

Publication number Publication date
WO2024060682A9 (en) 2024-06-13
WO2024060682A1 (en) 2024-03-28
CN115421919A (en) 2022-12-02

Similar Documents

Publication Publication Date Title
CN115421919B (en) Memory management method, device, memory manager, equipment and storage medium
CN109690500B (en) Providing resilient management of heterogeneous memory systems using spatial quality of service (QoS) tagging
US9612648B2 (en) System and method for memory channel interleaving with selective power or performance optimization
CN110069219B (en) Data storage method and system, electronic equipment and storage medium
CN111159436A (en) Method and device for recommending multimedia content and computing equipment
US11928359B2 (en) Memory swapping method and apparatus
US20060069898A1 (en) Memory manager for an embedded system
CN117891618B (en) Resource task processing method and device of artificial intelligent model training platform
CN113986559B (en) Memory management method and related device
CN108780420A (en) Priority-based access of compressed memory lines in memory in a processor-based system
TW201717026A (en) System and method for page-by-page memory channel interleaving
CN111538572A (en) Task processing method, device, scheduling server and medium
KR20110121362A (en) How to manage data to prevent memory fragmentation in memory pools
WO2024187779A1 (en) Service data storage method and apparatus, computer device, and storage medium
TW201717025A (en) System and method for page-by-page memory channel interleaving
CN110597879B (en) Method and device for processing time series data
CN108681469B (en) Page caching method, device, equipment and storage medium based on Android system
CN112152641B (en) Data interleaving method and device and data transmitting equipment
CN118801896A (en) Data compression method, device and electronic equipment
CN117215485A (en) ZNS SSD management method, data writing method, storage device and controller
CN116938849B (en) Intelligent adjustment method for flow table specification and related equipment
CN113254211A (en) Cache allocation method and device, electronic equipment and storage medium
CN117539796B (en) Electronic device and buffer memory management method
CN111858392B (en) Memory space allocation method and device, storage medium and electronic device
KR102334237B1 (en) Methods and apparatuses for managing page cache for multiple foreground application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant