CN103729249B - Method and computer system for memory management of virtual machine
- Publication number: CN103729249B
- Application number: CN201310459723.3A
- Authority: CN (China)
- Legal status: Active
Abstract
A memory management method for a virtual machine system, and a computer system. The memory management method includes the following. First, a first threshold is set by a processor. Then, in a first adjustment phase, the processor sets a balloon target to the allocated virtual memory size and stepwise decreases the balloon target by a first decrement according to a swap-in/refault detection result. The processor generates the swap-in/refault detection result by detecting at least one swap-in or refault event. In a cooling phase, the processor stops decreasing the balloon target according to the swap-in/refault detection result. In a second adjustment phase after the cooling phase, the processor stepwise decreases the balloon target by a second decrement. The second decrement is smaller than the first decrement, and the balloon target is not smaller than the first threshold.
Description
Technical Field
The present disclosure relates to techniques for memory management of virtual machines.
Background
Computer virtualization is a technique for creating a virtual machine that behaves like a physical computer with an operating system, and the architecture of computer virtualization is generally defined by its ability to support multiple operating systems concurrently on a single physical computer platform. For example, a computer running Microsoft Windows can host a virtual machine with a Linux operating system. When the virtual machine is regarded as a guest machine, the host is the physical machine on which the virtualization takes place. A hypervisor (more precisely, a virtual machine monitor, VMM) is a software layer that virtualizes hardware resources and presents a virtual hardware interface to at least one virtual machine. A hypervisor manages hardware resources in a manner similar to a traditional operating system and performs certain management functions with respect to the executing virtual machines. A virtual machine may be referred to as a "guest", and the operating system running within the virtual machine may be referred to as a "guest operating system".
Virtualized environments are currently memory-constrained, which means that the host's physical memory is the bottleneck for resource utilization in a data center. Memory virtualization decouples physical memory resources from the data center and aggregates them into a virtualized memory pool that can be accessed by guest operating systems or by applications running on the guest operating systems. In the context of memory virtualization, memory compression is one of the crucial topics in memory resource management and utilization.
Similar to a traditional operating system, the last resort for the hypervisor to free up memory is host swapping: a memory page of the virtual machine is moved to the swap space of the physical machine (called swap-out) in order to reclaim memory from the virtual machine, the corresponding page table entry (PTE) in the virtual machine's physical-to-machine (P2M) table is marked as not present, and the corresponding page is then released to the hypervisor's free memory pool, where a page table is the data structure used by the virtual machine to store the mapping between virtual addresses and physical addresses. Later, if the page is accessed by the virtual machine again, a page fault is triggered and a copy-on-access (COA) mechanism is started to bring the page content from the swap space into a newly allocated memory page, which is called swap-in. However, the cost of the long latency caused by disk input/output (I/O) is very unsatisfactory.
As another way to improve memory utilization, memory compression can be performed by compressing the swapped-out pages of a virtual machine into smaller data and keeping them together in memory, which saves the physical disk storage that would otherwise hold the original content. The idea is that swapping in from compressed memory is faster than swapping in from disk, because memory access is faster than disk access.
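The following minimal Python sketch illustrates this idea of keeping evicted pages in memory in compressed form. The 4 KiB page size and the zlib codec are illustrative assumptions and are not taken from the disclosure.

```python
import zlib

PAGE_SIZE = 4096  # assumed page size in bytes

class CompressedStore:
    """Keeps swapped-out pages in memory in compressed form."""
    def __init__(self):
        self.store = {}  # page frame number -> compressed bytes

    def swap_out(self, pfn, page_bytes):
        # Compress the page content instead of writing it to disk.
        self.store[pfn] = zlib.compress(page_bytes)

    def swap_in(self, pfn):
        # Decompressing from memory avoids the long disk I/O latency.
        return zlib.decompress(self.store.pop(pfn))

# Usage: a page of repetitive data compresses well and round-trips intact.
store = CompressedStore()
page = b"A" * PAGE_SIZE
store.swap_out(42, page)
assert store.swap_in(42) == page
```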
Nevertheless, memory compression is mainly regarded as a second choice, because it not only triggers the copy-on-access (COA) mechanism, which raises a hardware trap and stops the execution of the current application, but also consumes processing cycles of the host's processor to compress and decompress page contents, incurring additional overhead. Ideally, therefore, memory pages that are frequently accessed by the guest operating system (i.e., the working set) should not be compressed; instead, idle memory pages (i.e., guest memory pages outside the working set) should be identified and used for memory compression.
Summary
One embodiment of the present disclosure relates to a memory management method for a virtual machine system. The memory management method includes the following steps. First, a first threshold is set by a processor. Then, in a first adjustment phase, the processor sets a balloon target to the allocated virtual memory size and stepwise decreases the balloon target by a first decrement according to a swap-in/refault detection result. The processor generates the swap-in/refault detection result by detecting at least one swap-in or refault event. In a cooling phase, the processor stops decreasing the balloon target according to the swap-in/refault detection result. In a second adjustment phase after the cooling phase, the processor stepwise decreases the balloon target by a second decrement. The second decrement is smaller than the first decrement, and the balloon target is not smaller than the first threshold.
Another embodiment of the present disclosure relates to a computer system including a memory and a processor. The processor is coupled to the memory and performs the following operations for memory management of a virtual machine system. The processor sets a first threshold and, in a first adjustment phase, sets a balloon target to the allocated virtual memory size and stepwise decreases the balloon target by a first decrement according to a swap-in/refault detection result. The processor also generates the swap-in/refault detection result by detecting at least one swap-in or refault event. The processor stops decreasing the balloon target in a cooling phase according to the swap-in/refault detection result. The processor further stepwise decreases the balloon target by a second decrement in a second adjustment phase after the cooling phase. The second decrement is smaller than the first decrement, and the balloon target is not smaller than the first threshold.
Several exemplary embodiments accompanied by figures are described in detail below to further describe the present disclosure.
Brief Description of the Drawings
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the disclosure and, together with the description, serve to explain the principles of the disclosure. The drawings are not intended to limit the scope of the disclosure, which is defined by the appended claims.
FIG. 1A is a block diagram illustrating a computer system according to an exemplary embodiment of the present disclosure.
FIG. 1B is a block diagram illustrating a virtual machine system according to an exemplary embodiment of the present disclosure.
FIG. 2 is a phase diagram illustrating a method for memory management of a virtual machine according to an exemplary embodiment of the present disclosure.
FIG. 3 is another phase diagram illustrating a method for memory management of a virtual machine according to an exemplary embodiment of the present disclosure.
Description of Main Reference Numerals
100: computer system
100': virtual machine system
110: processor
120: system memory
150: virtual machine
155: guest operating system
160: hypervisor
170: virtual hardware
P12: path
P21: path
P23: path
P31: path
P32: path
P34: path
S1: first adjustment phase
S2: cooling phase
S3: second adjustment phase
S4: another cooling phase
S5: third adjustment phase
Detailed Description
Reference will now be made in detail to the exemplary embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numerals are used in the drawings and the description to refer to the same or similar parts.
For illustration purposes, one processor and one system memory are used in the following exemplary embodiments, and the disclosure is not limited thereto. In other exemplary embodiments, more than one processor and more than one system memory may be used.
FIG. 1A is a block diagram illustrating a computer system according to an exemplary embodiment of the present disclosure. Referring to FIG. 1A, the computer system 100 includes a processor 110, a system memory 120, and other standard peripheral components (not shown), where the system memory 120 is coupled to the processor 110.
The processor 110 may be a dedicated or special-purpose processor configured to perform specific tasks by executing machine-readable software code that defines functions related to its operations, carrying out those operations by communicating with the other components of the computer system 100.
The system memory 120 stores software such as an operating system and temporarily stores data or application programs that are currently active or frequently used. Accordingly, the system memory 120 (also referred to as physical memory) may be a faster type of memory, for example random access memory (RAM), static random access memory (SRAM), or dynamic random access memory (DRAM), in order to obtain a short access time.
Virtual memory is a technique for managing the resources of the system memory 120. It provides the illusion of a very large memory. Both the virtual memory and the system memory 120 are divided into blocks of contiguous memory addresses, which are also referred to as memory pages. The system memory 120 may, for example, include a compressed memory associated with at least one virtual machine running on the computer system 100. The compressed memory temporarily stores less recently accessed memory pages in a compressed format so that more space is available in the system memory 120.
A hypervisor is installed on the computer system 100 and provides a virtual machine execution space that supports at least one virtual machine, in which virtual machines can be instantiated and executed concurrently. FIG. 1B is a block diagram illustrating a virtual machine system according to an exemplary embodiment of the present disclosure. In this embodiment, only one virtual machine is described, but the present disclosure is not limited thereto. In other embodiments, multiple virtual machines may coexist and operate in a similar manner.
Referring to FIG. 1B together with FIG. 1A, the virtual machine system 100' includes a virtual machine 150 having a guest operating system 155 and other application programs (not shown), a hypervisor 160, and virtual hardware 170. The virtual hardware 170, including a processor, memory, and I/O devices, is abstracted and allocated as a virtual processor, virtual memory, and virtual I/O devices to the virtual machine 150 running on top of it. The hypervisor 160 manages the virtual machine 150 and provides emulated hardware and firmware resources. In one embodiment, a Linux distribution may be installed as the guest operating system 155 in the virtual machine 150 to execute any supported application, and the open-source software Xen, which supports most Linux distributions, may serve as the hypervisor 160. The guest operating system 155 includes a balloon driver (not shown). In cooperation with the hypervisor 160, the balloon driver can allocate or deallocate virtual memory of the guest operating system 155 by invoking memory management algorithms. For example, by leveraging the existing page reclamation mechanism of the guest operating system 155, a so-called true working set based ballooning algorithm (referred to as the TWS ballooning algorithm) is developed to probe the working set and reclaim idle memory pages as compression targets. In practice, particular attention is given to Linux guest operating systems, but the TWS ballooning algorithm can also be used with other guest operating systems (for example, Microsoft Windows).
In more detail, for memory compression to be effective, it is necessary to identify the working set of the virtual machine 150 and compress the memory pages outside the working set by utilizing the page reclamation mechanism. Intuitively, the working set of the virtual machine 150 is defined as the amount of memory that has recently been actively used by the virtual machine 150. For page reclamation, the Linux guest operating system 155 uses the least recently used (LRU) criterion to determine the order in which pages are evicted, and maintains two LRU lists, an active list and an inactive list, for each of the two major types of memory: anonymous memory and the page cache. Memory pages of anonymous memory are used by the heap and stack of user processes, while memory pages of the page cache are backed by disk data; after the first access to the disk data, the content is cached in memory to reduce the time of possible subsequent disk I/O. Memory pages in the active list are regarded as pages accessed more frequently, called hot pages; pages in the inactive list are regarded as pages accessed less frequently, called cold pages. After allocation, each memory page is placed in the active list by default.
In an embodiment of the present disclosure, the page reclamation mechanism may be triggered directly when the kernel of the guest operating system 155 (for example, the domU kernel) cannot allocate memory. For example, when a memory page is requested, the kernel may not be able to obtain one from the free memory pool of the hypervisor 160. The kernel then reclaims memory from the inactive list, which contains memory pages regarded as relatively cold, so that the reclaimed memory is unlikely to be accessed again soon. When the number of memory pages in the inactive list is insufficient to fulfill the memory allocation request, the kernel may traverse the active list and move cold pages from the active list to the inactive list.
One way to determine whether a memory page is a hot page or a cold page is to check and clear the hardware reference bit of the page table entry of the memory page. If the reference bit is set when the kernel scans the active list, the bit is cleared and the memory page is regarded as a hot page and kept in the active list. Otherwise, the memory page is regarded as a cold page and moved to the inactive list.
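A simplified sketch of the two-list reclamation policy described above is given below. The data structures and the boolean reference flag are illustrative stand-ins for the kernel's LRU lists and the hardware reference bit; this is a toy model under those assumptions, not the actual Linux implementation.

```python
from collections import OrderedDict

class TwoListLRU:
    """Toy model of the active/inactive page lists with a reference bit."""

    def __init__(self):
        self.active = OrderedDict()    # page -> referenced flag (hot pages)
        self.inactive = OrderedDict()  # page -> referenced flag (cold pages)

    def allocate(self, page):
        # Newly allocated pages are placed on the active list by default.
        self.active[page] = False

    def access(self, page):
        # A real access would set the hardware reference bit in the PTE.
        if page in self.active:
            self.active[page] = True
        elif page in self.inactive:
            self.inactive[page] = True

    def scan_active(self):
        # Referenced pages stay active and have their bit cleared;
        # unreferenced (cold) pages are demoted to the inactive list.
        for page, referenced in list(self.active.items()):
            if referenced:
                self.active[page] = False
            else:
                del self.active[page]
                self.inactive[page] = False

    def reclaim(self):
        # Evict the oldest inactive page; refill from the active list first
        # if the inactive list cannot satisfy the request.
        if not self.inactive:
            self.scan_active()
        if not self.inactive:
            return None  # everything is hot; nothing to reclaim right now
        page, _ = self.inactive.popitem(last=False)
        return page      # would be swapped out or dropped from the page cache
```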
If a memory page on the inactive list belongs to anonymous memory, the kernel may swap its content out to, for example, the swap space, mark the corresponding page table entry of the owning process as not present, and then free the corresponding memory page. Later, if the memory page is accessed again, the copy-on-access (COA) mechanism is performed by bringing the page content from the swap space into a newly allocated memory page (i.e., swap-in). Alternatively, if the memory page on the inactive list belongs to the page cache, the kernel may flush the page content to the swap space if the content has become dirty, and then free the page. Upon the next access, the kernel has to perform another disk access (called a refault) to bring the content back into a newly allocated page in the page cache. It should be noted that, in one embodiment, the swap space may be an area on a hard disk (not shown) of the computer system 100 used to offload LRU pages from the system memory 120. In another exemplary embodiment, however, if the user experiences slow operation of the computer system 100, a portion of the system memory 120 may also be used as the swap space.
Whenever a swap-in or refault event occurs, the performance of the virtual machine 150 is degraded by the disk I/O latency. In one embodiment, from the perspective of page reclamation, the performance overhead of the virtual machine 150 can be quantified by summing the swap-in count and the refault count, referred to as the overhead count (overhead_count), which can be written as equation (1):
overhead_count = swap-in count + refault count    (1)
In the process of performing memory management for the virtual machine 150, the balloon driver may be instructed to inflate or deflate. An inflate command is issued when the computer system 100 is under memory pressure. A deflate command is issued when the memory pressure has been relieved. Each inflate or deflate command includes an indication of a number of memory pages of the virtual machine 150, referred to herein as the balloon target. The balloon target associated with an inflate or deflate command represents the number of guest physical memory pages to be reclaimed from, or returned to, the guest operating system 155. In the Linux operating system, the reclamation mechanism can use the Committed_AS value to identify the total size of the anonymous memory consumed by all processes at a given moment. The Committed_AS value reflects the anonymous memory consumed by the virtual machine 150, but does not necessarily correspond to the working set size of the virtual machine 150. In other words, the Committed_AS value is incremented upon the first access to each newly allocated anonymous memory page, but is decremented only when the owning process explicitly frees the memory page. For example, if a program allocates and accesses a memory page only once when it starts and never touches the page again until it exits, the Linux kernel does not exclude this cold page from Committed_AS, even though the page is clearly outside the working set. Conversely, if a page cache page that belongs to the working set is evicted due to memory reclamation, a refault event can occur and can therefore be used as a signal that one more memory page should be added to the working set to accommodate the page cache.
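On a Linux guest, the Committed_AS value discussed above is also visible in user space through /proc/meminfo. The short sketch below reads it from there; this is just one convenient way to observe the value and is not necessarily how the balloon driver of this disclosure obtains it (the embodiment later retrieves it from the exported kernel variable vm_committed_AS).

```python
def read_committed_as_kib(path="/proc/meminfo"):
    """Return the kernel's Committed_AS value in KiB."""
    with open(path) as f:
        for line in f:
            if line.startswith("Committed_AS:"):
                # Line format: "Committed_AS:     1234567 kB"
                return int(line.split()[1])
    raise RuntimeError("Committed_AS not found in " + path)

if __name__ == "__main__":
    print("Committed_AS (KiB):", read_committed_as_kib())
```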
Therefore, in an embodiment, a counter of refault events in the virtual machine 150 can be maintained and the balloon target can be adjusted according to the refault count, so that the performance penalty caused by evicted page cache pages is minimized. In addition, the true working set of the virtual machine 150 can be probed actively. If the virtual machine 150 does not perform significant disk I/O, the true working set of the virtual machine 150 is lower than the Committed_AS value; significant disk I/O requires additional buffer cache pages to be included in the working set. In other words, when the physical memory allocation of the virtual machine 150 equals its working set size, the disk access overhead associated with swap-ins and refaults should be close to zero. Therefore, in order to probe the true working set of the virtual machine 150, the balloon target of the balloon driver can be raised step by step, stopping only when the swap-in and refault counts start to become non-zero.
More specifically, in one of the exemplary embodiments, the estimated working set size EWSS of the virtual machine 150 can be defined as equation (2):
EWSS = allocated memory size + overhead_count    (2)
where the allocated memory size is the number of memory pages allocated to the virtual machine 150, and the overhead count is the number of pages faulted into the virtual machine 150, as also defined in equation (1). In another exemplary embodiment, however, the estimated working set size EWSS of the virtual machine 150 may be a linear combination of the allocated memory size and the overhead count, as written in equation (3):
EWSS = A1 × (allocated memory size) + A2 × (overhead count) + C    (3)
where A1 ≥ 1, A2 ≤ 1, and C is a constant.
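The two estimators above translate directly into code. The sketch below implements equations (1) through (3); the values used in the example calls are placeholders that merely satisfy A1 ≥ 1 and A2 ≤ 1, not values prescribed by the disclosure.

```python
def overhead_count(swap_in_count, refault_count):
    # Equation (1): total number of pages faulted back into the VM.
    return swap_in_count + refault_count

def ewss_simple(allocated_pages, swap_in_count, refault_count):
    # Equation (2): allocated memory size plus the overhead count.
    return allocated_pages + overhead_count(swap_in_count, refault_count)

def ewss_linear(allocated_pages, swap_in_count, refault_count,
                a1=1.0, a2=1.0, c=0):
    # Equation (3): linear combination with A1 >= 1, A2 <= 1 and constant C.
    assert a1 >= 1 and a2 <= 1
    return (a1 * allocated_pages
            + a2 * overhead_count(swap_in_count, refault_count)
            + c)

# Example: 50,000 allocated pages, 120 swap-ins and 30 refaults observed.
print(ewss_simple(50_000, 120, 30))        # 50150
print(ewss_linear(50_000, 120, 30, c=64))  # 50214.0
```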
When the amount of memory allocated to the virtual machine 150 is higher than its true working set, a customized balloon driver is used to collect the swap-in and refault counts every second, and the balloon target is adjusted to probe the true working set of the virtual machine 150. The upper bound of the balloon target is set by the processor 110 to the configured memory size of the virtual machine 150 when the virtual machine 150 boots, and the lower bound is set according to the amount of memory required by the guest operating system 155. In other words, the initial lower bound is calculated based on the self-ballooning algorithm, with the pages reserved for system emergencies and for compressed pages added on top. Without such adjustment, the guest operating system 155 can easily run into out-of-memory exceptions when the Committed_AS value is low. To better approximate the true working set size of the virtual machine 150, the processor 110 uses three runtime phases and adjusts the balloon target adaptively, as shown in FIG. 2.
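A small helper for the bounds described above might look as follows. The exact composition of the lower bound (self-ballooning target plus emergency and compressed-page reserves) is an assumption made for illustration; the disclosure does not spell out a formula.

```python
def balloon_target_bounds(configured_pages, self_balloon_pages,
                          emergency_reserve_pages, compressed_pool_pages):
    """Upper bound: the VM's configured memory size.
    Lower bound: the self-ballooning target plus reserved pages
    (assumed composition, for illustration only)."""
    upper = configured_pages
    lower = self_balloon_pages + emergency_reserve_pages + compressed_pool_pages
    return lower, upper

def clamp(target, lower, upper):
    # The balloon target is always kept within [lower, upper].
    return max(lower, min(target, upper))

# Example with made-up page counts.
lo, hi = balloon_target_bounds(262144, 120000, 2048, 8192)
print(clamp(100000, lo, hi))  # 130240 (clamped up to the lower bound)
```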
FIG. 2 is a phase diagram illustrating a method for memory management of a virtual machine according to an exemplary embodiment of the present disclosure.
Referring to FIG. 2 together with the components in FIG. 1A and FIG. 1B, starting from the first adjustment phase S1, the processor 110 initializes the balloon target of the balloon driver of the guest operating system 155 to the first threshold (for example, the Committed_AS value), because no explicit information about the page cache is available at this point. The Committed_AS value can be retrieved from the exported kernel variable vm_committed_AS, which can be accessed by any kernel component or loadable kernel module in Linux. Next, the processor 110 lowers the balloon target stepwise by the first decrement, where the balloon target is not smaller than the first threshold. In this exemplary embodiment, the first decrement may be a percentage of the current Committed_AS value (for example, 5%). Whenever a swap-in or refault event occurs, the processor 110 generates a swap-in/refault detection result, which indicates the occurrence of the swap-in or refault event. Based on the swap-in/refault detection result, the processor 110 stops lowering the balloon target and switches the balloon driver from the first adjustment phase S1 to the cooling phase S2 via path P12. Then, in the cooling phase S2, the processor 110 raises the balloon target by an amount of memory whose page count equals the total number of swap-in and refault counts, because each swap-in or refault event indicates the need for an additional free memory page.
It should be noted that the swap-in count can be retrieved from a variable (for example, pswpin) of a kernel component (for example, vmstat) that collects summary statistics about operating system memory, processes, interrupts, paging, and block I/O in Linux. Refault information for disk I/O can be obtained by intercepting, for example, the I/O operations on the swap space using a facility of the Linux kernel (for example, blktrace), which provides an application programming interface (API) to add hook points to disk I/O and gather statistics. Once disk I/O is traced, each block of the disk is initialized to "0" in a bitmap, and the corresponding bit is set to "1" when that disk block is accessed. A refault event is counted when the bit has already been set before being changed to one, that is, when the corresponding block was accessed before but requires another access.
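The per-block bitmap logic can be sketched as follows. The sketch consumes a generic stream of block numbers; wiring it up to real trace data (for example, records parsed from blktrace output) is omitted, and the set-based "bitmap" is an implementation convenience rather than something specified by the disclosure.

```python
class RefaultCounter:
    """Counts re-reads of disk blocks: any read of a block that was already
    read before is treated as a refault event."""

    def __init__(self):
        self.seen = set()        # blocks whose "bitmap" bit is already 1
        self.refault_count = 0

    def on_block_read(self, block_no):
        if block_no in self.seen:
            # The bit was set before this access: count a refault.
            self.refault_count += 1
        else:
            self.seen.add(block_no)

# Example trace: block 7 is read twice, so exactly one refault is counted.
counter = RefaultCounter()
for block in [3, 7, 12, 7, 20]:
    counter.on_block_read(block)
print(counter.refault_count)  # 1
```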
Such swap-in and refault events indicate that the balloon target is approaching the true working set, or that there is a sudden burst of memory demand from an application. It is therefore unwise to further reduce the memory allocation of the virtual machine 150. Even when a swap-in or refault event occurs, the memory allocation of the virtual machine 150 is allowed to exceed the Committed_AS value. Such flexibility is especially important for a virtual machine 150 running I/O-intensive workloads, where the Committed_AS value does not reflect the additional memory demand caused by the page cache.
Also in the cooling phase S2, the processor 110 initializes a cool-down counter to a second threshold (for example, arbitrarily set to 8) and from then on decrements the counter every time period, where the time period may be one second. When the cool-down counter reaches zero, the processor 110 may consider that the workload burst has passed and then switches the balloon driver to the second adjustment phase S3 via path P23.
In the second adjustment phase S3, the balloon target follows the same logic as in the first adjustment phase S1, except that the processor 110 lowers the balloon target stepwise by a second decrement that is smaller than the first decrement. In this embodiment, the second decrement may be a percentage of the current Committed_AS value (for example, 1%). More specifically, the processor 110 lowers the balloon target stepwise by 1% of the current Committed_AS in the second adjustment phase S3. Whenever a swap-in or refault event occurs, the processor 110 stops lowering the balloon target by switching the balloon driver from the second adjustment phase S3 to the cooling phase S2 via path P32. The processor 110 also raises the balloon target by an amount of memory whose page count equals the total number of swap-in and refault counts, and reinitializes the cool-down counter to the second threshold.
On the other hand, when the balloon driver is in the cooling phase S2 or the second adjustment phase S3 and the Committed_AS value of the virtual machine 150 changes, the processor 110 considers that the working set size of the virtual machine 150 is about to change significantly and resets the balloon target by entering the first adjustment phase S1. That is, if the balloon target exceeds the Committed_AS value because of a burst of swap-in or refault events before the processor 110 switches the balloon driver to the first adjustment phase S1, either from the second adjustment phase S3 via path P31 or from the cooling phase S2 via path P21, the processor 110 may reinitialize the balloon target to the Committed_AS value plus the excess, taken as the newly estimated working set size according to equation (2). In other words, the first threshold changes at this point. In another embodiment, the processor 110 may also reinitialize the balloon target to a linear combination of the Committed_AS value and the overhead count according to equation (3). It should also be noted that when the processor 110 switches the balloon driver from the cooling phase S2 to the first adjustment phase S1 before the cool-down counter has reached zero, the balloon driver still enters the first adjustment phase S1, but the processor 110 keeps counting down until the cool-down counter reaches zero before resuming working set probing.
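Putting the pieces together, the sketch below models the three-phase control loop of FIG. 2 as a state machine that is ticked once per sampling period. The one-second sampling, the 5% and 1% decrements, and the cool-down value of 8 come from the embodiment above; the class and method names, the lower-bound handling, and the way the overshoot updates the Committed_AS estimate are simplifying assumptions, and the detail of finishing a residual countdown after re-entering S1 is omitted.

```python
ADJUST_1, COOLING, ADJUST_2 = "S1", "S2", "S3"

class TwsBalloonController:
    """Toy state machine for the three runtime phases of FIG. 2."""

    def __init__(self, committed_as, lower_bound, cooldown_init=8):
        self.committed_as = committed_as      # current first threshold
        self.lower_bound = lower_bound        # floor for the balloon target
        self.cooldown_init = cooldown_init    # second threshold
        self.balloon_target = committed_as    # S1 starts from Committed_AS
        self.cooldown = 0
        self.phase = ADJUST_1

    def tick(self, committed_as, swap_in, refaults):
        """One sampling period (the embodiment samples once per second)."""
        overhead = swap_in + refaults         # equation (1)

        # A Committed_AS change while in S2/S3 restarts probing in S1
        # (paths P21 and P31).
        if self.phase != ADJUST_1 and committed_as != self.committed_as:
            self.committed_as = committed_as
            self.balloon_target = max(committed_as, self.lower_bound)
            self.phase = ADJUST_1
            return self.balloon_target

        if self.phase == ADJUST_1:
            if overhead:
                self._enter_cooling(overhead)    # path P12
            else:
                self._lower(0.05)                # first decrement: 5%
        elif self.phase == COOLING:
            self.cooldown -= 1
            if self.cooldown <= 0:
                self.phase = ADJUST_2            # path P23
        else:  # ADJUST_2
            if overhead:
                self._enter_cooling(overhead)    # path P32
            else:
                self._lower(0.01)                # second decrement: 1%

        return self.balloon_target

    def _enter_cooling(self, overhead):
        # Each swap-in or refault asks for one more free page, so give the
        # pages back and pause probing for a while.
        self.balloon_target += overhead
        if self.balloon_target > self.committed_as:
            # Overshoot: adopt Committed_AS plus the excess as the new
            # working-set estimate, per equation (2).
            self.committed_as = self.balloon_target
        self.cooldown = self.cooldown_init
        self.phase = COOLING

    def _lower(self, fraction):
        step = int(self.committed_as * fraction)
        self.balloon_target = max(self.balloon_target - step, self.lower_bound)
```

In use, tick would be fed the per-second Committed_AS, swap-in, and refault samples, and its return value would be handed to the balloon driver as the new target.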
In one embodiment, swap-in/refault events may be detected on more than one occasion before the processor 110 switches the balloon driver from the second adjustment phase S3 back to the first adjustment phase S1. That is, the balloon driver may cycle between the cooling phase S2 and the second adjustment phase S3 before entering the first adjustment phase S1. As an example, FIG. 3 is another phase diagram illustrating a method for memory management of a virtual machine according to an exemplary embodiment of the present disclosure. It should be noted that only the differences from the exemplary embodiment shown in FIG. 2 are explained below. The present disclosure is not limited thereto.
Referring to FIG. 3 together with the components in FIG. 1A and FIG. 1B, before the processor 110 switches the balloon driver from the second adjustment phase S3 back to the first adjustment phase S1, it may alternately switch the balloon driver, according to the swap-in/refault detection results, to at least one further cooling phase S4 and at least one further second adjustment phase (referred to as at least one third adjustment phase S5). In this exemplary embodiment, for clarity and ease of explanation, only one further cooling phase S4 and one third adjustment phase S5 are described. The present disclosure is not limited thereto.
More specifically, when the balloon driver is in the second adjustment phase S3 and a swap-in or refault event occurs, the processor 110 stops lowering the balloon target by switching the balloon driver from the second adjustment phase S3 to the further cooling phase S4 via path P34. The processor 110 also raises the balloon target by an amount of memory whose page count equals the total number of swap-in and refault counts, and further initializes another cool-down counter to a third threshold (for example, arbitrarily set to 8). In the further cooling phase S4, the processor 110 decrements this cool-down counter every time period, where the time period may be one second. When this cool-down counter reaches zero, the processor 110 may consider that the workload burst has passed and then switches the balloon driver to the third adjustment phase S5 via path P45.
In the third adjustment phase S5, the balloon target follows the same logic as in the second adjustment phase S3, except that the processor 110 lowers the balloon target stepwise by a third decrement. In this exemplary embodiment, the third decrement is set equal to the second decrement, i.e., 1% of the Committed_AS value.
In one embodiment, there are multiple further cooling phases S4 and third adjustment phases S5 that alternate with each other. Whenever a swap-in or refault event occurs, the processor 110 may then stop lowering the balloon target by switching the balloon driver from the third adjustment phase S5 to the next further cooling phase in a similar manner. Likewise, the processor 110 may switch the balloon driver from that next cooling phase to the next third adjustment phase according to another cool-down counter. It should also be noted that the third decrement may be different in each of these further phases. The present disclosure is not limited thereto.
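The extension of FIG. 3 simply keeps alternating cooling and adjustment phases, where each adjustment phase may use its own decrement. One compact way to express this, shown purely as an illustrative fragment rather than anything taken from the disclosure, is a per-phase schedule of decrement fractions that the controller advances each time a cooling phase ends.

```python
# Per-adjustment-phase decrement fractions: 5% in S1, then 1% in S3, S5, ...
# Later entries could differ if a phase is meant to probe more gently.
DECREMENT_SCHEDULE = [0.05, 0.01, 0.01]

def decrement_for(adjustment_index):
    """Return the decrement fraction for the n-th adjustment phase,
    reusing the last entry once the schedule is exhausted."""
    idx = min(adjustment_index, len(DECREMENT_SCHEDULE) - 1)
    return DECREMENT_SCHEDULE[idx]

# Example: S1 -> 5%, S3 -> 1%, every later adjustment phase -> 1%.
print([decrement_for(i) for i in range(5)])  # [0.05, 0.01, 0.01, 0.01, 0.01]
```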
In addition, similar to the cooling phase S2 and the second adjustment phase S3 of FIG. 2, when the balloon driver is in the further cooling phase S4 or the third adjustment phase S5 and the Committed_AS value of the virtual machine 150 changes, the processor 110 considers that the working set size of the virtual machine 150 is about to change significantly and resets the balloon target by entering the first adjustment phase S1 via path S41 or path S51, respectively. For details, reference can be made to the related description of the exemplary embodiment of FIG. 2.
Through the memory management method described above (i.e., the TWS ballooning algorithm), the processor 110 can probe the true working set of the virtual machine 150 and reclaim unnecessary cold pages from the system memory 120 back to the memory pool of the hypervisor 160, saving more resources while maintaining application performance without significant degradation.
In embodiments, the memory management method described above can be implemented by executing a prepared program on a computer such as a personal computer or a workstation. The program is stored on a computer-readable recording medium (for example, a hard disk, a floppy disk, a CD-ROM, an MO, or a DVD), read from the computer-readable medium, and executed by the computer. The program may be distributed over a network (for example, the Internet).
In summary, by utilizing the existing page reclamation mechanism of the guest OS, the memory management method of the present disclosure is designed to probe the true working set of a virtual machine, and idle memory pages are reclaimed as compression targets. By using a customized balloon driver, the true working set of the virtual machine can be estimated accurately and dynamically based on its current memory usage while taking swap-in/refault events into account, and the true working set in turn benefits memory resource management.
Claims (22)
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201261712279P | 2012-10-11 | 2012-10-11 | |
| US61/712,279 | 2012-10-11 | ||
| US13/951,474 US9069669B2 (en) | 2012-10-11 | 2013-07-26 | Method and computer system for memory management on virtual machine |
| US13/951,474 | 2013-07-26 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN103729249A CN103729249A (en) | 2014-04-16 |
| CN103729249B true CN103729249B (en) | 2017-04-12 |
Family
ID=50453332
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201310459723.3A (Active) | Method and computer system for memory management of virtual machine | | |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN103729249B (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10037270B2 (en) * | 2015-04-14 | 2018-07-31 | Microsoft Technology Licensing, Llc | Reducing memory commit charge when compressing memory |
| CN111666226B (en) * | 2020-06-16 | 2022-09-13 | 北京紫光展锐通信技术有限公司 | Page bump protection method for operating system memory recovery and user equipment |
| CN116243850B (en) * | 2021-06-08 | 2024-05-28 | 荣耀终端有限公司 | Memory management method and electronic equipment |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101681268A (en) * | 2007-06-27 | 2010-03-24 | 国际商业机器公司 | System, method and program for managing virtual machine memory |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2009029496A1 (en) * | 2007-08-24 | 2009-03-05 | Yiping Ding | Virtualization planning system |
- 2013-09-30: CN application CN201310459723.3A filed; published as CN103729249B (status: Active)
Also Published As
| Publication number | Publication date |
|---|---|
| CN103729249A (en) | 2014-04-16 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| | GR01 | Patent grant | |