Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
References herein to "a plurality of" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. The character "/" generally indicates an "or" relationship between the objects before and after it.
Objects that use memory include service data, and service data of different service types all need to use memory. In the related art, the maximum memory amount used by each service type is estimated, and the total memory amount is determined by accumulating these estimated maxima. In this way, a large memory space is occupied, while in actual use not every service occupies its estimated maximum, so memory is wasted.
Therefore, an embodiment of the present application provides a memory management method that realizes dynamic management of memory during service processing: the memory occupation amounts corresponding to different services can be adjusted in real time according to actual memory usage, saving memory. The method provided by the embodiments of the present application can be applied to a computer device. The computer device may be a mobile terminal device such as a smart phone, a tablet computer, or a laptop computer, or may be a desktop computer, a projection computer, etc., which is not limited in the embodiments of the present application.
Referring to fig. 1, a flowchart of a memory management method according to an exemplary embodiment of the present application is shown. This embodiment is described taking the case where the method is performed by a memory manager in a computer device as an example, and the process includes the following steps:
Step 101: determine memory slices according to the service types corresponding to different service data.
In general, when service data of different service types are stored, the memory is divided into different slices, each used to store service data of one type. Thus, in one possible implementation, the memory manager may perform a preliminary division of the memory to obtain different memory slices, where different memory slices are used to store service data of different service types.
Optionally, during the preliminary division, the memory manager may divide the memory uniformly to obtain a memory slice for the service data of each service type, or may divide the slices according to the characteristics of the different service types; for example, a memory slice with larger capacity may be divided for a frequently used service, and a memory slice with smaller capacity for an infrequently used service.
Illustratively, as shown in fig. 2, taking dividing memory slices for service type 1, service type 2 and service type 3 as an example, three memory slices can be obtained: the first memory slice 201 stores service data of service type 1, the second memory slice 202 stores service data of service type 2, and the third memory slice 203 stores service data of service type 3.
Step 102: adjust the capacity of each memory slice based on its interrupt frequency during service processing, where the interrupt frequency refers to the frequency at which the memory usage of the memory slice is greater than the memory waterline, and the memory waterline is the threshold at which the memory slice triggers an interrupt.
Each memory slice is provided with a memory waterline, i.e. the threshold at which the memory slice triggers an interrupt. The memory waterline may be the maximum capacity of the memory slice, or may be an intermediate value smaller than the maximum capacity, which is not limited in this embodiment.
When the memory usage of a memory slice is greater than the memory waterline, an interrupt is triggered so that the processor can process the data in the slice in time. When interrupts are triggered frequently, the power consumption of the terminal increases and the efficiency of service batch processing is affected.
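The waterline mechanism described above can be sketched as a minimal Python model; the class and field names here are illustrative assumptions, not identifiers from the application:

```python
class MemorySlice:
    """A memory slice with a waterline (watermark) that triggers an interrupt."""

    def __init__(self, capacity, watermark):
        self.capacity = capacity        # total slice capacity
        self.watermark = watermark      # memory waterline: interrupt threshold
        self.used = 0                   # current memory usage
        self.interrupt_count = 0        # how many interrupts this slice triggered

    def store(self, amount):
        """Store service data; trigger an interrupt when usage crosses the waterline.

        Returns True when an interrupt was triggered, so the processor can be
        notified to drain the slice in time.
        """
        self.used += amount
        if self.used > self.watermark:
            self.interrupt_count += 1
            return True
        return False
```

For example, with a capacity of 100 and a waterline of 80, storing 50 units does not trigger an interrupt, but storing another 40 units does.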
Therefore, in one possible implementation, the capacity of each memory slice is adjusted according to its interrupt frequency, so as to reduce the probability of frequent interrupts. The memory manager counts the interrupt frequency of each memory slice, and updates the corresponding count whenever an interrupt is triggered.
After the interrupt frequency of each memory slice is obtained, the slice capacity of the memory slice is adjusted based on the interrupt frequency of each memory slice.
In one possible implementation, when the interrupt frequency of a memory slice is high, the memory manager may increase its capacity, so that the maximum amount of memory the slice is allowed to use increases accordingly, which effectively reduces the interrupt frequency. During the adjustment, the total memory capacity is kept unchanged; therefore, when the capacity of one memory slice is increased, the capacity of another memory slice is correspondingly reduced. To reduce the influence of the adjustment on other service processes, after the memory slice whose capacity needs to be increased is determined, a memory slice whose capacity can be reduced is determined according to the interrupt frequency, and the adjustment is performed only when such a slice exists. Accordingly, when a slice's interrupt frequency is low, its memory usage is likely also low, so the capacity of a memory slice with a low interrupt frequency can be reduced, improving the memory usage rate.
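The adjustment policy described above can be sketched as follows, under assumed thresholds and an assumed fixed step size (all names are illustrative):

```python
def adjust_capacities(capacities, freqs, high, low, step):
    """Move `step` units of capacity from a low-interrupt-frequency slice to a
    high-interrupt-frequency slice; the total capacity stays constant.

    capacities: per-slice capacities (mutated in place)
    freqs:      per-slice interrupt frequencies
    high/low:   first (high) and second (low) frequency thresholds
    """
    grow = [i for i, f in enumerate(freqs) if f > high]    # slices to enlarge
    shrink = [i for i, f in enumerate(freqs) if f < low]   # slices to reduce
    # Adjust only when both a slice to enlarge and a slice to reduce exist,
    # so other services are not affected.
    if grow and shrink:
        capacities[grow[0]] += step
        capacities[shrink[0]] -= step
    return capacities
```

For example, with capacities [30, 30, 40] and interrupt frequencies [10, 1, 5], a high threshold of 8 and a low threshold of 2, slice 0 grows and slice 1 shrinks while the total stays 100.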
In the embodiments of the present application, after the memory slices for different services are initially divided, the capacity of each memory slice is dynamically adjusted according to its actual usage during service processing. Since the memory usage of different services differs, with some services demanding more memory and others less, reasonably adjusting the slice capacities corresponding to the different services improves the memory usage rate and avoids memory waste.
In summary, in the embodiments of the present application, the memory manager in the computer device first determines the memory slices storing different service data based on the corresponding service types. Then, according to the frequency at which memory usage exceeds the memory waterline during service processing, the capacity of each memory slice can be dynamically adjusted: the capacity of a slice with a higher interrupt frequency can be increased, and the capacity of a slice with a lower interrupt frequency can be reduced. Compared with the related-art approach of planning memory capacity from the estimated maximum usage of each service, the dynamic memory management mechanism of the embodiments reasonably adjusts the capacities of the memory slices of different services, which improves the memory utilization rate and saves memory space. Moreover, because slice capacity is adjusted based on interrupt frequency, frequent interrupt triggering is reduced, which helps lower power consumption and improves the efficiency of service data batch processing.
During data processing, data may need to be moved from one node to another for processing, and different memory nodes need to be set up to realize this data movement. In one possible implementation, the memory may first be divided into different memory nodes, and each memory node may then be divided into different memory slices. When dividing the memory into nodes, the memory capacity required by each memory node can be determined first, and the node division performed based on that capacity. When each memory node is divided into slices, the division can be planned preliminarily according to service usage frequency, giving the memory a reasonable initial layout. Exemplary embodiments are described below.
Referring to fig. 3, a flowchart of a memory management method according to another exemplary embodiment of the present application is shown. This embodiment is described taking the case where the method is performed by a memory manager in a computer device as an example, and the process includes the following steps:
In step 301, the memory is divided into nodes to obtain memory nodes; memory movement exists between different memory nodes.
In this embodiment, the memory manager first performs node division on the memory to obtain different memory nodes. In one possible implementation, the memory manager performs node division based on whether the processed data undergoes memory movement: when data needs to be moved, memory nodes are required to store the moved data, i.e. memory movement exists between different memory nodes.
Illustratively, take the data processed by a modem chip as an example. When the physical layer receives data from the network side, it performs a series of processing on the data; after the processing is completed, the data is moved to the protocol layer, where each processing layer performs further processing. When data is moved between the physical layer and the protocol layer, the data in the memory corresponding to the physical layer needs to be moved to the memory corresponding to the protocol layer, i.e. memory movement exists, so the physical layer and the protocol layer each correspond to one memory node. The protocol layer comprises processing layers such as MAC/RLC/PDCP/SDAP; each of these layers can access data in the same memory node when processing it, so no memory movement occurs between them, and the protocol layer can be regarded as a single node, i.e. it corresponds to only one memory node.
During the division, the memory manager may determine the memory capacity required by each memory node and divide the memory accordingly; after the division, the memory capacity of each memory node remains unchanged. In one possible implementation, node division of the memory may include steps 301a-301c (not shown):
In step 301a, processing time of each data node for processing received data is determined based on a first data rate and a second data rate, the data node is a node for processing data of the memory node, the first data rate is a rate for receiving data by the data node, and the second data rate is a rate for processing data by the data node.
When performing memory division, the memory manager first determines the memory capacity required by each memory node and then obtains the memory nodes by dividing based on that capacity. For each memory node, there is a data node that processes the data in it. In one possible implementation, the memory capacity may be calculated from the processing time each data node needs to process data, where the processing time refers to the time needed to process the data received per unit time.
Optionally, when the rate of receiving data is higher than the rate of processing data, the processing time is longer and the required memory amount is larger. When the receiving rate is equal to or lower than the processing rate, data can be processed in time, the processing time is short, and the required memory amount is small. The processing time may therefore be determined as the ratio of the rate at which the data node receives data (the first data rate) to the rate at which it processes data (the second data rate), and the memory capacity is then determined based on the processing time.
Illustratively, taking the protocol layer as the data node, when the data rate supported by the protocol layer, i.e. the rate of receiving data, is 8 G/s and the corresponding rate at which the CPU processes the data is 4 G/s, the processing time required for the data received per second is 2 s.
In step 301b, the memory capacity required for each data node is determined based on the processing time and the first data rate.
After the processing time is determined, the product of the processing time and the first data rate may be determined as the memory capacity required by each data node.
Illustratively, when the first data rate of the data node is r and the processing time is t, the required memory capacity is r×t.
In step 301c, the memory is divided into nodes based on the memory capacity required by each data node, so as to obtain the memory nodes.
After determining the memory capacity required by each data node, the memory manager divides the memory according to the memory capacity required by each node to obtain the memory nodes. Determining the memory capacity from the actual data processing capability of each node before dividing maximizes the memory utilization rate.
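Steps 301a-301c can be summarized in a small sketch; the function name and the treatment of units are assumptions:

```python
def node_memory_capacity(recv_rate, proc_rate):
    """Memory capacity required by a data node (steps 301a-301b).

    recv_rate: first data rate, the rate at which the node receives data
    proc_rate: second data rate, the rate at which the node processes data

    The processing time per unit of received data is the ratio of the two
    rates; the required capacity is the first data rate times that time
    (capacity = r * t).
    """
    processing_time = recv_rate / proc_rate
    return recv_rate * processing_time
```

With the example from the text (receiving at 8 G/s, processing at 4 G/s), the processing time is 2 and the required capacity is 8 × 2 = 16 G.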
Step 302: divide each memory node into memory slices corresponding to different service types based on the service usage frequency of each service type, where the service usage frequency is positively correlated with the slice capacity.
After the memory nodes are obtained, the memory manager divides them into slices. In one possible implementation, the terminal may count the service usage frequencies of the different service types and divide each memory node accordingly: the higher the service usage frequency, the larger the capacity of the memory slice allocated.
Optionally, the services may be ordered by usage frequency, and the ratio of the corresponding memory slice determined according to the resulting order of service types. The memory manager may preset the memory slice ratios corresponding to the service types at different ranks, and divide the memory node according to these ratios.
Illustratively, taking 5 service types as an example, the memory slice ratios corresponding to the service types at different ranks may be as shown in table 1:
TABLE 1

| Order | Memory slice ratio |
| 1     | 50%                |
| 2-3   | 20%                |
| 4-5   | 5%                 |
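The frequency-ordered division of step 302, with rank-based ratios as in Table 1, can be sketched as follows (the function name and the dictionary representation are illustrative assumptions):

```python
def partition_node(node_capacity, usage_freq, ratio_by_rank):
    """Divide a memory node into slices sized by service usage frequency.

    usage_freq:    {service_type: usage frequency}
    ratio_by_rank: {rank (1 = most used): fraction of node capacity}
    Returns {service_type: slice capacity}.
    """
    # Order service types by usage frequency, most frequently used first.
    order = sorted(usage_freq, key=usage_freq.get, reverse=True)
    return {svc: int(node_capacity * ratio_by_rank[rank])
            for rank, svc in enumerate(order, start=1)}
```

For example, with the Table 1 ratios expanded per rank (50%, 20%, 20%, 5%, 5%) and five hypothetical services, the most frequently used service receives half the node.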
After the memory slices are obtained by division, the memory manager sets a memory description table and a memory management table for each memory slice, facilitating management of the slices.
The memory description table may be as shown in table 2:
TABLE 2

| Description_0 |
| Description_1 |
| ...           |
| Description_n |
Here, Description_i represents the memory description information of the i-th service unit in the memory slice, including its start address, size, etc. A service unit is the unit formed by the minimum memory units that can be processed at a time during service processing. Service units of different service types differ in size, i.e. one service unit contains a different number of minimum memory units for different service types. For example, a service unit of service type 1 may consist of 3 minimum memory units, while a service unit of service type 2 consists of 2.
The memory management table is shown in table 3:
TABLE 3

| in_data_base_address |
| in_total_depth       |
| in_filled_depth      |
| in_read_pointer      |
| in_write_pointer     |
| in_full              |
| in_empty             |
| watermark_threshold  |
| enable               |
Here, in_data_base_address represents the start address of the memory slice, in_total_depth the slice capacity, in_filled_depth the used capacity (the number of stored service units), in_read_pointer the pointer position of the data to be processed, in_write_pointer the pointer position for storing data, in_full the memory-full flag, in_empty the memory-empty flag (i.e. no stored data), watermark_threshold the memory waterline threshold that triggers a processing interrupt after a certain amount of memory is used, and enable the memory-enable flag.
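For illustration, the fields of Table 3 can be rendered as a record; this dataclass is a hypothetical representation of the management table, not the actual in-memory layout:

```python
from dataclasses import dataclass


@dataclass
class MemoryManagementTable:
    """One management-table entry per memory slice (cf. Table 3)."""
    in_data_base_address: int     # start address of the memory slice
    in_total_depth: int           # slice capacity
    in_filled_depth: int = 0      # used capacity (number of stored service units)
    in_read_pointer: int = 0      # pointer position of data to be processed
    in_write_pointer: int = 0     # pointer position for storing data
    in_full: bool = False         # memory-full flag
    in_empty: bool = True         # memory-empty flag (no stored data)
    watermark_threshold: int = 0  # waterline that triggers a processing interrupt
    enable: bool = False          # memory-enable flag
```

A newly created entry starts empty, with the read and write pointers at the base of the slice.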
Step 303: read the interrupt frequency of each memory slice from an interrupt frequency statistics table, where the table stores the interrupt frequency of the memory slice corresponding to each service type, and the interrupt frequency is updated when an interrupt is triggered.
In one possible implementation, in addition to the memory description table and the memory management table, the memory manager is provided with an interrupt frequency statistics table corresponding to the memory node, as shown in table 4:
TABLE 4

| InterruptFrequency_1 |
| ...                  |
| InterruptFrequency_n |
| ThresholdH           |
| ThresholdL           |
Here, InterruptFrequency_i represents the interrupt frequency of the interrupts triggered by the memory slice i corresponding to service type i.
During service processing, the memory manager counts the frequency with which each memory slice triggers an interrupt, and updates the interrupt frequency in the statistics table in real time.
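The statistics table and its per-interrupt update can be sketched as follows; the class name, method names and the returned labels are illustrative stand-ins for the Table 4 fields:

```python
class InterruptStats:
    """Per-slice interrupt counters with high/low thresholds (cf. Table 4)."""

    def __init__(self, n_slices, threshold_h, threshold_l):
        self.freq = [0] * n_slices      # InterruptFrequency_1 .. _n
        self.threshold_h = threshold_h  # ThresholdH: first (high) threshold
        self.threshold_l = threshold_l  # ThresholdL: second (low) threshold

    def on_interrupt(self, slice_id):
        """Called each time slice `slice_id` triggers a waterline interrupt."""
        self.freq[slice_id] += 1

    def classify(self, slice_id):
        """'grow' marks a candidate first slice (to enlarge), 'shrink' a
        candidate second slice (to reduce)."""
        f = self.freq[slice_id]
        if f > self.threshold_h:
            return "grow"
        if f < self.threshold_l:
            return "shrink"
        return "keep"
```

A slice interrupted more often than ThresholdH becomes a first-slice candidate; one interrupted less often than ThresholdL becomes a second-slice candidate.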
In one possible implementation, the memory manager may obtain the interrupt frequency in real time, so as to adjust the slice capacity of each memory slice according to changes in the interrupt frequency.
In step 304, if the interrupt frequency of a memory slice is greater than a first frequency threshold, the memory slice is determined as the first memory slice.
In one possible implementation, a first frequency threshold is stored in the memory manager. When the interrupt frequency of a memory slice is greater than the first frequency threshold, memory usage in that slice has exceeded the memory waterline many times, so the slice capacity needs to be increased; the slice is therefore determined as the first memory slice, i.e. the memory slice to be enlarged.
Optionally, the first frequency threshold may be stored in the interrupt frequency statistics table: as shown in table 4, ThresholdH denotes the high interrupt frequency threshold, i.e. the first frequency threshold.
In step 305, if the interrupt frequency of a memory slice is less than a second frequency threshold, the memory slice is determined as the second memory slice, where the first frequency threshold is greater than the second frequency threshold.
After determining the first memory slice, the memory manager also needs to determine a second memory slice whose capacity can be reduced, so as to avoid affecting the corresponding service processing.
The memory manager stores a preset second frequency threshold, i.e. the low interrupt frequency threshold, which is lower than the first frequency threshold. When the interrupt frequency of a memory slice is less than the second frequency threshold, memory usage in that slice has rarely exceeded the memory waterline, and its memory usage is accordingly relatively low, so its slice capacity can be reduced; the slice is therefore determined as the second memory slice, i.e. a reducible memory slice.
Optionally, the second frequency threshold may be stored in the interrupt frequency statistics table: as shown in table 4, ThresholdL denotes the low interrupt frequency threshold, i.e. the second frequency threshold.
Step 306: increase the capacity of the first memory slice and decrease the capacity of the second memory slice.
In one possible implementation, only when both the first memory slice and the second memory slice exist in the memory node does the memory manager adjust the slice capacities, ensuring that increasing one slice's capacity does not affect other service processing.
Optionally, the memory manager may increase the capacity of the first memory slice and decrease the capacity of the second memory slice, where the amount by which the first memory slice is increased equals the amount by which the second memory slice is decreased.
Illustratively, as shown in fig. 4, when the interrupt frequency of service type 2 is lower than the second frequency threshold and the interrupt frequency of service type 3 is higher than the first frequency threshold, the slice capacity of service type 2 is reduced from the first capacity 401 to the second capacity 402, and the slice capacity of service type 3 is increased from the third capacity 403 to the fourth capacity 404, without increasing the memory capacity of the memory node.
The specific adjustment process may refer to the following embodiments, which are not described in detail.
In this embodiment, during node division, the required memory capacity is determined from each node's data receiving rate and data processing rate, and the nodes are divided according to the capacity each node requires. Compared with the related-art approach of estimating and accumulating the maximum usage of each service type, this plans the memory capacity according to each node's actual data processing capability, improving the memory usage rate.
During the preliminary division, dividing according to service usage frequency and allocating larger memory slices to frequently used services reduces subsequent slice capacity adjustments. In this embodiment, the slice capacities are adjusted only after both the first memory slice and the second memory slice are detected: by reducing the memory slice of an inactive service to enlarge the memory slice of an active service, the memory capacity of the memory node remains unchanged, only the ratios of the slices of different service types are adjusted, and the influence on service processing is reduced.
When adjusting slice capacities, there may be no other memory slice between the first memory slice and the second memory slice, in which case their capacities can be adjusted directly. In another possible case, other memory slices lie between them, and those slices need to be moved to realize the capacity adjustment. Exemplary embodiments are described below.
As shown in fig. 5, the step 306 may include the following steps:
In step 306a, when no third memory slice exists between the first memory slice and the second memory slice, the adjacent start and end addresses between the first memory slice and the second memory slice are adjusted.
After the address adjustment, the distance between the first start address and the first end address of the first memory slice is increased, and the distance between the second start address and the second end address of the second memory slice is reduced.
In one possible case, after determining the first and second memory slices, the memory manager may first determine whether a third memory slice exists between them; if not, only the adjacent addresses need to be adjusted.
Optionally, when the start address of the first memory slice (the first start address) is adjacent to the end address of the second memory slice (the second end address), the first start address and the second end address can be adjusted while the end address of the first memory slice (the first end address) and the start address of the second memory slice (the second start address) are kept unchanged. Schematically, as shown in fig. 6, the second memory slice occupies addresses 1-20 and the first memory slice occupies addresses 21-50. During the capacity adjustment, the second end address and the first start address are reduced simultaneously: after adjustment the second memory slice occupies 1-15 and the first memory slice occupies 16-50, so the capacity of the first memory slice is increased and the capacity of the second memory slice is reduced.
Optionally, when the first end address of the first memory slice is adjacent to the second start address of the second memory slice, the first end address and the second start address may be adjusted: both are increased simultaneously while the first start address and the second end address remain unchanged, so the capacity of the first memory slice increases and the capacity of the second memory slice decreases.
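The adjacent-boundary adjustment of step 306a can be sketched as follows, using inclusive [start, end] address ranges as in the fig. 6 example (the function name is an assumption):

```python
def adjust_adjacent(first, second, delta):
    """Grow `first` by `delta` at the expense of the adjacent `second`.

    Each slice is a mutable [start, end] list with inclusive addresses.
    Only the shared boundary moves; the outer addresses stay unchanged.
    """
    if second[1] + 1 == first[0]:        # second sits just below first
        second[1] -= delta               # second end address moves down
        first[0] -= delta                # first start address moves down
    elif first[1] + 1 == second[0]:      # second sits just above first
        first[1] += delta                # first end address moves up
        second[0] += delta               # second start address moves up
    else:
        raise ValueError("slices are not adjacent")
    return first, second
```

With the fig. 6 example (second slice 1-20, first slice 21-50, delta 5), the result is second 1-15 and first 16-50.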
In step 306b, when a third memory slice exists between the first memory slice and the second memory slice, a minimum move path is determined, i.e. the move path containing the smallest number of third memory slices.
In another possible case, one or more third memory slices lie between the first and second memory slices; during the capacity adjustment, these third memory slices need to be moved, i.e. the start address of each third memory slice (the third start address) and its end address (the third end address) are adjusted.
In one possible implementation, when third memory slices exist, different move paths may be possible, and the memory manager may determine the minimum move path among them and use it to adjust the slice capacities, improving adjustment efficiency. Determining the minimum move path may include the following steps:
Step one: determine the number of third memory slices under the different move paths.
In one possible implementation, the memory manager may determine the number of third memory slices that need to be moved under each candidate move path.
Illustratively, when the memory block corresponding to the memory node spans addresses 1-100, the memory node comprises slice 1 (1-10), slice 2 (11-40), slice 3 (41-60), slice 4 (61-80), slice 5 (81-90) and slice 6 (91-100).
Suppose the first memory slice (to be enlarged) is slice 2 and the second memory slice (to be reduced) is slice 4. Under the first move path, the move order is slice 2, slice 3, slice 4, and the number of third memory slices is 1; under the second move path, the move order is slice 2, slice 1, slice 6, slice 5, slice 4, and the number of third memory slices is 3.
Step two: determine the move path with the smallest number of third memory slices as the minimum move path.
In one possible implementation, the memory manager determines the move path corresponding to the smallest number of slices as the minimum move path, so that the memory slices are adjusted with the least amount of movement, improving adjustment efficiency.
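The path selection can be sketched as follows, treating the slices of a memory node as a ring of slice ids and counting the intermediate (third) slices in each direction; this is an illustrative model under that assumption, not the claimed algorithm itself:

```python
def min_move_path(order, first_id, second_id):
    """Return the ids of the third memory slices on the minimum move path.

    order:     slice ids in address order around the node (treated as a ring)
    first_id:  the slice to be enlarged
    second_id: the slice to be reduced
    """
    i, j = order.index(first_id), order.index(second_id)
    n = len(order)
    # Intermediate slices going forward (increasing addresses, wrapping).
    fwd = [order[(i + k) % n] for k in range(1, (j - i) % n)]
    # Intermediate slices going backward.
    bwd = [order[(i - k) % n] for k in range(1, (i - j) % n)]
    return fwd if len(fwd) <= len(bwd) else bwd
```

With the six-slice example above (enlarge slice 2, reduce slice 4), the forward path crosses only slice 3, while the backward path crosses slices 1, 6 and 5, so the forward path is chosen.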
Step 306c: increase the capacity of the first memory slice and decrease the capacity of the second memory slice based on the minimum move path.
In one possible implementation, after the minimum move path is determined, the capacity of the first memory slice may be increased and the capacity of the second memory slice decreased along that path. This may include the following steps:
Step one: adjust the first start address or the first end address of the first memory slice according to the adjustment order indicated by the minimum move path; after the adjustment, the interval between the first start address and the first end address is increased.
In one possible implementation, the memory manager may adjust the first memory slice, the third memory slices and the second memory slice in sequence along the minimum move path, taking either the first memory slice or the second memory slice as the starting point; this embodiment is not limited in this respect.
During the adjustment, from the first memory slice to the second memory slice, one of the first start address and the first end address of the first memory slice is kept unchanged, and one of the second start address and the second end address of the second memory slice is kept unchanged.
When the first memory slice is the starting point and its first start address is kept unchanged, the second end address of the second memory slice is kept unchanged after the addresses are adjusted in sequence; when the first end address is kept unchanged, the second start address is kept unchanged.
Conversely, when the second memory slice is the starting point, keeping its second start address unchanged means the first end address of the first memory slice is kept unchanged, and keeping its second end address unchanged means the first start address of the first memory slice is kept unchanged.
That is, only one of the first start address and the first end address needs to be adjusted when adjusting the first memory slice.
Step two: adjust the third start address and the third end address of each third memory slice; after the adjustment, the interval between the third start address and the third end address is kept unchanged.
When a third memory slice is adjusted, its slice capacity is kept unchanged: both the third start address and the third end address are adjusted so that the interval between them remains the same.
In one possible embodiment, the first memory segment may be the starting point, or the second memory segment may be the starting point. When the first memory area is used as a starting point, after the first memory area is adjusted, a third starting address and a third ending address of a third memory area are sequentially adjusted, and then the second memory area is adjusted. When the second memory area is used as a starting point, after the second memory area is adjusted, the third starting address and the third ending address of the third memory area are sequentially adjusted.
Step three, adjust the second starting address or the second ending address of the second memory partition, where the interval between the second starting address and the second ending address decreases after the adjustment.
In one possible implementation, only one of the second starting address and the second ending address needs to be adjusted when adjusting the second memory partition.
Illustratively, as shown in fig. 7, suppose the addresses of the first memory partition are 11-40, those of the third memory partition are 41-60, and those of the second memory partition are 61-80. Taking the first memory partition as the adjustment starting point, under the minimum shift path the adjusted partitions are the first memory partition (11-45), the third memory partition (46-65), and the second memory partition (66-80).
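The three adjustment steps above can be sketched in Python. This is a minimal illustrative sketch, not part of the embodiment: the function name, the representation of a partition as a (start, end) address pair, and the assumption that the first partition's starting address stays fixed are all assumptions for illustration.

```python
# Hypothetical sketch of steps one to three along a minimum shift path.
# Partitions are (start, end) address pairs; the first partition is the
# adjustment starting point and its starting address is kept unchanged.

def shift_partitions(first, thirds, second, grow):
    """Grow `first` by `grow` address units, shift each intermediate third
    partition without changing its capacity, and shrink `second`."""
    # Step one: keep the first starting address fixed, move the ending address.
    first = (first[0], first[1] + grow)
    # Step two: shift every third partition; its span stays unchanged.
    thirds = [(s + grow, e + grow) for s, e in thirds]
    # Step three: keep the second ending address fixed, move the starting
    # address up, so the second partition's span shrinks by `grow`.
    second = (second[0] + grow, second[1])
    return first, thirds, second

# Reproducing the fig. 7 example: partitions 11-40, 41-60, 61-80 with the
# first partition grown by 5 units yield 11-45, 46-65, 66-80.
print(shift_partitions((11, 40), [(41, 60)], (61, 80), 5))
```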
In one possible case, the memory manager's statistics may yield at least two second memory partitions, that is, at least two memory partitions whose capacity can be reduced. In this case, to minimize the amount of adjustment, one second memory partition may be selected from the candidates according to the minimum shift path between partitions.
Optionally, when at least two second memory partitions are included, the minimum shift path between each of the second memory partitions and the first memory partition is determined before increasing the capacity of the first memory partition and reducing the capacity of the second memory partition.
That is, when at least two second memory partitions are obtained, the memory manager determines the minimum shift path between each second memory partition and the first memory partition, and then reduces the second memory partition whose minimum shift path is shortest.
Illustratively, when two third memory partitions lie on the minimum shift path between second memory partition A and the first memory partition, while only one third memory partition lies on the minimum shift path between second memory partition B and the first memory partition, second memory partition B is determined as the partition to be reduced, and the memory manager reduces the partition capacity of second memory partition B.
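The selection rule in the example can be sketched as follows; measuring path length by the number of intermediate third partitions, as the text does, is the only assumption, and the names are illustrative.

```python
def pick_second_partition(candidates):
    """Given a mapping from candidate second-partition name to the number of
    third partitions between it and the first partition, return the candidate
    with the fewest intermediates, i.e. the shortest minimum shift path."""
    return min(candidates, key=candidates.get)

# Partition A has two intermediate third partitions, B has one, so B is
# chosen as the partition to be reduced.
print(pick_second_partition({"A": 2, "B": 1}))
```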
In this embodiment, when a third memory partition exists between the first memory partition and the second memory partition, the memory manager adjusts along the minimum shift path, which improves adjustment efficiency. Moreover, when a plurality of second memory partitions exist, one of them is selected according to the minimum shift path between each second memory partition and the first memory partition, which further improves adjustment efficiency and reduces the impact on service processing.
During capacity adjustment, the adjustment may be performed according to the amount of memory units that the service type needs to add. Step 306 may then include the following steps:
Step one, determine the amount of memory units to be added for the service type corresponding to the first memory partition, where the amount of memory units is counted in minimum memory units of the memory node.
In one possible implementation, the memory manager first determines the amount of memory units, that is, the number of minimum memory units, to be added for the service type corresponding to the first memory partition.
Optionally, the amount of memory units to be added may be determined by a fixed ratio; for example, the increase is 5% of the current partition capacity of the first memory partition, so 5% of that capacity is determined as the amount of memory units to be added.
Alternatively, the amount of memory units to be added may be adjusted dynamically according to how many times the interrupt frequency has exceeded the first frequency threshold: the more times the threshold has been exceeded, the more memory units are added, up to an upper limit on the increase. For example, the capacity may be increased by 5% the first time the interrupt frequency exceeds the first frequency threshold and by 8% the second time, until the increase reaches the 20% upper limit.
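The dynamic scheme above can be sketched as follows. The 5%/8% schedule and the 20% cap come from the example in the text; making the schedule and cap function parameters is an illustrative choice, not part of the embodiment.

```python
def increase_ratio(times_over_threshold, schedule=(0.05, 0.08), cap=0.20):
    """Return the fraction by which the first memory partition grows, given
    how many times its interrupt frequency has exceeded the first frequency
    threshold. Follows the text's example: 5% on the first exceedance, 8% on
    the second, saturating at the 20% upper limit thereafter."""
    if times_over_threshold <= 0:
        return 0.0
    if times_over_threshold <= len(schedule):
        return schedule[times_over_threshold - 1]
    return cap  # further exceedances saturate at the upper limit

print(increase_ratio(1), increase_ratio(2), increase_ratio(5))
```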
Step two, increase the partition capacity of the first memory partition based on the amount of memory units.
The memory manager increases the capacity of the first memory partition by the amount of memory units, so that the number of minimum memory units in the adjusted first memory partition is the sum of the number before adjustment and the amount of memory units.
Step three, reduce the partition capacity of the second memory partition based on the amount of memory units.
The memory manager reduces the capacity of the second memory partition by the amount of memory units, so that the number of minimum memory units in the adjusted second memory partition is the number before adjustment minus the amount of memory units.
In the above manner, the partition capacity is adjusted directly by the amount of memory units the service type needs to add. However, the service units of different service types have different sizes, that is, the number of minimum memory units processed at a time differs between services. Suppose the service unit of the first service, corresponding to the first memory partition, spans 3 minimum memory units, while the service unit of the second service, corresponding to the second memory partition, spans 2 minimum memory units. When the first service needs 3 more service units, 9 minimum memory units must be added; if the second memory partition is reduced by 9 minimum memory units, a single leftover minimum memory unit remains in the second memory partition that cannot form a service unit of the second service, producing a memory fragment and wasting memory. Therefore, in another possible implementation, a memory adjustment granularity may be set, and the adjustment is performed based on that granularity. The method may include the following steps:
Step one, obtain the service unit size of each memory partition, where the service unit size is the number of minimum memory units processed at a time during service processing.
In one possible implementation, the memory manager obtains the service unit size of the service type corresponding to each memory partition in the memory node, and determines the memory adjustment granularity from these sizes. The memory adjustment granularity is the adjustment reference unit during capacity adjustment, that is, the reference number of minimum memory units adjusted at a time; the final adjusted partition capacity is an integer multiple of the memory adjustment granularity.
Alternatively, in each adjustment the memory manager may obtain the service unit sizes of the service types corresponding to the memory partitions on the shift path (including those corresponding to the first memory partition, the second memory partition, and the third memory partition), and determine the memory adjustment granularity from the service unit sizes of the partitions being adjusted on that path.
Step two, determine the least common multiple of the service unit sizes as the memory adjustment granularity, where the memory adjustment granularity is the reference number of minimum memory units adjusted during capacity adjustment, and the adjusted partition capacity is an integer multiple of the granularity.
In one possible implementation, the memory manager determines the least common multiple of the service unit sizes as the memory adjustment granularity. For example, when the service unit sizes include 2 and 3, the granularity is 6, and capacity adjustment is performed in units of 6 minimum memory units.
Step three, increase the partition capacity of the first memory partition and reduce the partition capacity of the second memory partition based on the memory adjustment granularity.
After determining the memory adjustment granularity, the memory manager increases the capacity of the first memory partition and reduces the capacity of the second memory partition based on that granularity. The amount of memory units to be added is determined first; if it is already an integer multiple of the granularity, the capacity is adjusted by exactly that amount. For example, when the granularity is 6 and 12 memory units are to be added, the capacity is adjusted by 12. If the amount is not an integer multiple of the granularity, the smallest integer multiple of the granularity larger than the amount is used as the adjustment amount. For example, when 15 memory units are to be added, the adjustment amount becomes 18: the memory manager adds 18 minimum memory units to the first memory partition and removes 18 minimum memory units from the second memory partition.
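Steps two and three can be sketched as follows; the function names are illustrative, and the sketch simply combines a least-common-multiple computation with rounding up to a multiple of the granularity, reproducing the 12 and 15-to-18 examples from the text.

```python
from functools import reduce
from math import gcd

def adjustment_granularity(service_unit_sizes):
    """Least common multiple of the service unit sizes, used as the
    reference number of minimum memory units per adjustment."""
    return reduce(lambda a, b: a * b // gcd(a, b), service_unit_sizes)

def rounded_adjustment(requested_units, granularity):
    """Round the requested number of minimum memory units up to the next
    integer multiple of the adjustment granularity."""
    return -(-requested_units // granularity) * granularity  # ceiling division

g = adjustment_granularity([2, 3])  # service unit sizes 2 and 3 give lcm 6
print(g, rounded_adjustment(12, g), rounded_adjustment(15, g))
```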
Because the memory adjustment granularity is the least common multiple of the service unit sizes, memory fragments are effectively avoided even when different services have different service unit sizes.
In this embodiment, the memory adjustment granularity is determined from the service unit sizes, and the partition capacity of the memory partition is adjusted based on that granularity, which avoids the generation of memory fragments during adjustment and thus avoids memory waste.
Referring to fig. 8, a block diagram of a memory management device according to an embodiment of the application is shown. As shown in fig. 8, the apparatus may include:
the memory dividing module 801 is configured to determine memory partitions according to the service types corresponding to different service data;
the memory adjustment module 802 is configured to adjust the partition capacity of a memory partition based on the interrupt frequency of the memory partition during service processing, where the interrupt frequency is the frequency at which the memory usage of the memory partition exceeds the memory waterline, and the memory waterline is the threshold at which the memory partition triggers an interrupt.
Optionally, the memory adjustment module 802 is further configured to:
determining a memory partition as a first memory partition when the interrupt frequency of the memory partition is greater than a first frequency threshold;
determining a memory partition as a second memory partition when the interrupt frequency of the memory partition is less than a second frequency threshold, where the first frequency threshold is greater than the second frequency threshold;
and increasing the partition capacity of the first memory partition and reducing the partition capacity of the second memory partition.
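The two-threshold classification performed by the memory adjustment module can be sketched as follows; the function name and the dictionary representation of per-partition interrupt frequencies are illustrative assumptions.

```python
def classify_partitions(interrupt_freqs, first_threshold, second_threshold):
    """Split memory partitions by interrupt frequency: above the first
    frequency threshold they become first partitions (to be grown); below
    the second, lower threshold they become second partitions (to be
    shrunk). Partitions in between are left unchanged."""
    assert first_threshold > second_threshold
    first = [p for p, f in interrupt_freqs.items() if f > first_threshold]
    second = [p for p, f in interrupt_freqs.items() if f < second_threshold]
    return first, second

# Partition "a" interrupts often and is grown; "b" rarely and is shrunk;
# "c" sits between the thresholds and keeps its capacity.
print(classify_partitions({"a": 12, "b": 1, "c": 5}, 10, 2))
```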
Optionally, the memory adjustment module 802 is further configured to:
when no third memory partition exists between the first memory partition and the second memory partition, adjusting the adjacent starting address and ending address between the first memory partition and the second memory partition, where after the adjustment the interval between the first starting address and the first ending address of the first memory partition increases and the interval between the second starting address and the second ending address of the second memory partition decreases.
Optionally, the memory adjustment module 802 is further configured to:
determining a minimum shift path when a third memory partition exists between the first memory partition and the second memory partition, where the minimum shift path contains the smallest number of third memory partitions;
and increasing the partition capacity of the first memory partition and reducing the partition capacity of the second memory partition based on the minimum shift path.
Optionally, the memory adjustment module 802 is further configured to:
adjusting the first starting address or the first ending address of the first memory partition according to the adjustment sequence indicated by the minimum shift path, where the interval between the first starting address and the first ending address increases after the adjustment;
adjusting the third starting address and the third ending address of the third memory partition, where the interval between the third starting address and the third ending address remains unchanged after the adjustment;
and adjusting the second starting address or the second ending address of the second memory partition, where the interval between the second starting address and the second ending address decreases after the adjustment.
Optionally, the memory adjustment module 802 is further configured to:
determining the number of third memory partitions on different shift paths;
and determining the shift path with the smallest number of third memory partitions as the minimum shift path.
Optionally, the memory adjustment module 802 is further configured to:
determining, when at least two second memory partitions are included, the minimum shift path between each of the at least two second memory partitions and the first memory partition;
and reducing the second memory partition corresponding to the shortest minimum shift path.
Optionally, the memory adjustment module 802 is further configured to:
determining the amount of memory units to be added for the service type corresponding to the first memory partition, where the amount of memory units is counted in minimum memory units of the memory node;
increasing the partition capacity of the first memory partition based on the amount of memory units;
and reducing the partition capacity of the second memory partition based on the amount of memory units.
Optionally, the memory adjustment module 802 is further configured to:
obtaining the service unit size of each memory partition, where the service unit size is the number of minimum memory units processed at a time during service processing;
determining the least common multiple of the service unit sizes as the memory adjustment granularity, where the memory adjustment granularity is the reference number of minimum memory units adjusted during capacity adjustment, and the adjusted partition capacity is an integer multiple of the granularity;
and increasing the partition capacity of the first memory partition and reducing the partition capacity of the second memory partition based on the memory adjustment granularity.
Optionally, the memory partitioning module 801 is further configured to:
dividing the memory into memory nodes, where memory movement exists between different memory nodes;
and dividing a memory node into memory partitions corresponding to different service types based on the service use frequency of each service type, where the service use frequency is positively correlated with the partition capacity of the corresponding memory partition.
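The positive correlation between service use frequency and partition capacity can be sketched as a proportional split. The proportional formula and the rule of giving leftover units to the most frequent service are illustrative assumptions; the text only states that the correlation is positive.

```python
def divide_node(node_capacity, use_frequencies):
    """Split a memory node's capacity among service types in proportion to
    their use frequencies, so a higher frequency yields a larger partition.
    Leftover units from integer division go to the most frequent service;
    that tie-breaking rule is an assumption, not from the source."""
    total = sum(use_frequencies.values())
    parts = {svc: node_capacity * f // total for svc, f in use_frequencies.items()}
    leftover = node_capacity - sum(parts.values())
    parts[max(use_frequencies, key=use_frequencies.get)] += leftover
    return parts

print(divide_node(100, {"video": 3, "audio": 1, "control": 1}))
```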
Optionally, the memory partitioning module 801 is further configured to:
determining, based on a first data rate and a second data rate, the processing time for each data node to process received data, where a data node is a node that processes data of a memory node, the first data rate is the rate at which the data node receives data, and the second data rate is the rate at which the data node processes data;
determining the memory capacity required by each data node based on the processing time and the first data rate;
and dividing the memory into memory nodes based on the memory capacity required by each data node.
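One plausible reading of the two determination steps is sketched below: the processing time follows from the amount of data and the second (processing) rate, and the required capacity is the data that arrives at the first (receive) rate during that time. The text does not give the exact formulas, so both the batch-size parameter and the formulas are assumptions.

```python
def node_memory_capacity(batch_size, receive_rate, process_rate):
    """Illustrative reading of the embodiment's two steps: processing time
    is derived from the second data rate (processing), and the required
    memory is what arrives at the first data rate (receiving) during that
    time. `batch_size` is a hypothetical parameter, not from the source."""
    processing_time = batch_size / process_rate   # step: time from rates
    return processing_time * receive_rate         # step: capacity from time and receive rate

# 1000 units processed at 100 units/s take 10 s; at 50 units/s arriving,
# 500 units of memory are needed to buffer the incoming data.
print(node_memory_capacity(1000, 50, 100))
```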
Optionally, the apparatus further includes:
a frequency acquisition module, configured to read the interrupt frequency of a memory partition from an interrupt frequency statistics table, where the interrupt frequency statistics table stores the interrupt frequency of the memory partition corresponding to each service type, and the interrupt frequency is updated when an interrupt is triggered.
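The statistics table can be sketched as a per-partition counter that is updated on each interrupt and read by the memory manager; the class and method names are illustrative.

```python
class InterruptStats:
    """Minimal sketch of the interrupt frequency statistics table: one
    counter per memory partition, updated whenever that partition triggers
    an interrupt (its memory usage exceeds the memory waterline)."""

    def __init__(self, partitions):
        self.table = {p: 0 for p in partitions}

    def on_interrupt(self, partition):
        # Update the table entry when the interrupt is triggered.
        self.table[partition] += 1

    def read(self, partition):
        # The frequency acquisition module reads from the table.
        return self.table[partition]

stats = InterruptStats(["video", "audio"])
stats.on_interrupt("video")
stats.on_interrupt("video")
print(stats.read("video"), stats.read("audio"))
```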
In summary, in the embodiment of the present application, the memory manager in the computer device first determines the memory partitions corresponding to different service data based on their service types. It then dynamically adjusts the capacity of each memory partition according to the frequency at which the memory usage exceeds the memory waterline during service processing: the capacity of a memory partition with a high interrupt frequency is increased, and the capacity of a memory partition with a low interrupt frequency is reduced. Compared with the related-art approach of planning memory capacity from the estimated maximum memory usage of each service, the dynamic memory management mechanism of this embodiment reasonably adjusts the capacity of the memory partition corresponding to each service type, which improves memory utilization and saves memory space. In addition, because the partition capacity is adjusted based on the interrupt frequency, frequent interrupt triggering is reduced, which helps lower power consumption and improves the efficiency of batch processing of service data.
It should be noted that the device provided in the above embodiment is illustrated only by the division of the above functional modules. In practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept; for the detailed implementation process of the apparatus, refer to the method embodiments, which are not repeated herein.
Referring to FIG. 9, a block diagram of a computer device 900 according to an exemplary embodiment of the application is shown. Computer device 900 in the present application may include one or more components including a processor 910 and a memory 920.
Processor 910 may include one or more processing cores. The processor 910 connects various parts of the computer device 900 through various interfaces and lines, and performs various functions of the computer device 900 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 920 and invoking data stored in the memory 920. Optionally, the processor 910 may be implemented in hardware in at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 910 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; and the modem handles wireless communication. It can be understood that the modem may alternatively not be integrated into the processor 910 and instead be implemented by a separate baseband chip.
Memory 920 may include memory 921 and memory manager 922. The memory 921 may include a Random Access Memory (RAM) and may also include a Read-Only Memory (ROM). Optionally, the memory 920 includes a non-transitory computer-readable storage medium. The memory 921 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 921 may include a program storage area and a data storage area. The program storage area may store instructions for implementing an operating system, which may be an Android system (including systems developed based on Android), an iOS system developed by Apple Inc. (including systems developed based on iOS), or another system; instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function); and instructions for implementing the above method embodiments. The data storage area may store data created by the computer device 900 during use (such as a phonebook, audio and video data, and chat log data).
Memory 920 may also include a memory manager 922, where memory manager 922 includes programmable logic and/or program instructions for managing memory of computer device 900.
In addition, those skilled in the art can understand that the structure of the computer device 900 shown in the above figures does not limit the computer device 900; a computer device may include more or fewer components than shown, combine certain components, or use a different arrangement of components. For example, the computer device 900 may further include components such as a radio frequency circuit, a camera component, a sensor, an audio circuit, a wireless fidelity (WiFi) component, a power supply, and a Bluetooth component, which are not described herein.
The present application also provides a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement the memory management method provided by any of the above-described exemplary embodiments.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions to cause the computer device to perform the memory management method provided in the above-described alternative implementation.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing description of the preferred embodiments of the present application is not intended to limit the application; the scope of the application is defined by the appended claims.