CN111432006B - Lightweight resource virtualization and distribution method - Google Patents
- Publication number
- CN111432006B CN202010234317.7A
- Authority
- CN
- China
- Prior art keywords
- mirror image
- container
- layer
- command
- mirror
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
- H04L67/1074—Peer-to-peer [P2P] networks for supporting data block transmission mechanisms
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention relates to a lightweight resource virtualization and distribution method, comprising the following steps: (a) dividing a public container image in a service center into different image layers, and acquiring image-layer management information; (b) selecting the image of any node in the public container image as a base image, developing an application service image on top of it, and deriving the necessary common base image layers from the base image; (c) analyzing the image-layer management information, and preloading the necessary common base image layers according to the hardware characteristics and task mission of each node; (d) pulling image-layer data block by block via the container engine within the same service center, or establishing a P2P network for image data transmission between different service centers. The method optimizes resource utilization in the container stack and improves image distribution and loading speed, so that services start as fast as possible while system resource consumption is kept as low as possible.
Description
Technical Field
The invention belongs to the technical field of networks, and relates to a lightweight resource virtualization and distribution method.
Background
Traditional virtualization technologies such as vSphere or Hyper-V are operating-system-centric, while container technology is an application-centric virtualization technology. In 2013, lightweight virtualization represented by Docker attracted wide attention, and container technology along with it (Docker is an LXC-based advanced container engine open-sourced by the PaaS provider dotCloud). Meanwhile, Docker Inc. has been actively developing Docker-container-based management tools: Docker Machine installs the Docker engine directly through a command; Docker Swarm is a clustering and scheduling tool that can optimize the infrastructure of a distributed application based on the application's life cycle, container usage, and performance requirements; and Docker Compose builds multi-container applications that run on Swarm.
Cloud computing resource allocation requires unified management and reasonable allocation of heterogeneous resources in the cloud environment, and allocation schemes target both user objectives and service-provider objectives. Different allocation strategies are realized through different resource allocation algorithms, so these algorithms have become a research focus. Meta-heuristic algorithms are well suited to resource allocation thanks to their randomized search, fast learning mechanisms, and adaptability, and foreign scholars have produced a substantial body of results in recent years. Professor Rajkumar Buyya of the University of Melbourne, Australia, proposed an allocation algorithm based on user task completion time and service cost that guides users toward cost-effective resource allocation schemes. Ke Liu of Swinburne University of Technology further proposed an algorithm balancing time and cost: taking the final task completion deadline and user cost as parameters, it either meets the user's completion-time expectation at reduced cost, or shortens completion time within the user's budget. Suraj Pandey of IBM added price parameters to a meta-heuristic and proposed a particle swarm optimization allocation algorithm that takes the communication and computation costs of task allocation and execution as the fitness of task particles, improving load balance. Professor T. P. Shabeera of Calicut University, India, proposed a priority-driven ant colony algorithm to optimize DAG-model-based job scheduling; experimental results show the improved algorithm raises virtual machine throughput.
Meanwhile, domestic scholars have also studied resource allocation algorithms. One method orders computing resources by usage cost from high to low, achieving the lowest task completion cost while meeting user QoS requirements. Another takes minimum completion time as the allocation target, combines the characteristics of the ant colony algorithm and simulated annealing, and introduces a matching factor between computing resources and allocated tasks; it performs well in both task completion time and system load balance. To address the ant colony algorithm's slow convergence and tendency to fall into local optima during search, chaos theory has been introduced: the ant colony paths are initialized according to chaotic ergodicity, and a chaotic disturbance factor is added to adjust the pheromone update rule. To make the improved ant colony algorithm's resource allocation more reasonable, a combined optimization model has been designed that dynamically adjusts the pheromone volatilization coefficient and adds a reward-and-punishment mechanism to the pheromone update rule. A probability-adaptive ant colony algorithm has been proposed against the local-optimum defect in task allocation, and a scheduling model has been proposed that enhances task parallelism while balancing the serial dependencies between tasks.
In mobile environments, however, high service-center mobility, limited service-center resources, poor network connectivity, and similar conditions place new requirements on resource virtualization and allocation if services are to remain reliable and efficient and resources fully utilized.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a lightweight resource virtualization and allocation method.
In order to achieve this purpose, the technical scheme adopted by the invention is as follows: a lightweight resource virtualization and allocation method comprising the following steps:
(a) Dividing a public container image in a service center into different image layers, and acquiring image-layer management information; from bottom to top, the image layers comprise a kernel layer, an operating system layer, a common component layer, a development language layer, and a development framework layer;
(b) Selecting the image of any node in the public container image as a base image, developing an application service image on top of it, and deriving the necessary common base image layers from the base image;
(c) Analyzing the image-layer management information, and preloading the necessary common base image layers according to the hardware characteristics and task mission of each node;
(d) Pulling image-layer data block by block via the container engine within the same service center, or establishing a P2P network for image data transmission between different service centers.
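The layering idea running through steps (a)-(d) can be sketched as follows. This is an illustrative model only, with hypothetical names; it is not the patented implementation, but it shows why preloading common base layers means only the topmost application layer must be transmitted at deployment time.

```python
# Hypothetical sketch: an image is an ordered stack of layers, and only the
# layers a node does not already hold need to be transmitted to it.

# Bottom-to-top hierarchy from step (a)
BASE_HIERARCHY = ["kernel", "os", "common-components", "language", "framework"]

def layers_to_transmit(image_layers, node_cache):
    """Return the layers of `image_layers` that `node_cache` is missing."""
    return [layer for layer in image_layers if layer not in node_cache]

app_image = BASE_HIERARCHY + ["app-service"]   # step (b): app built on a base image
preloaded = set(BASE_HIERARCHY)                # step (c): common base layers preloaded

missing = layers_to_transmit(app_image, preloaded)  # step (d): only this is pulled
print(missing)                                      # ['app-service']
```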
Preferably, in step (a), generation of the public container image is regulated based on Dockerfile syntax.
Further, the Dockerfile syntax adjustment rules are:
(a1) Merging consecutive adjacent RUN commands into one RUN command using the && operator;
(a2) Combining multiple ENV commands into one ENV command;
(a3) Judging whether the source address of an ADD command is a local compressed file, and if not, changing the ADD command into a COPY command;
(a4) Expressing the parameters of the CMD and ENTRYPOINT commands in array form.
Preferably, in step (a), Docker's preparation of the proc file system for the container is also replaced: a dedicated mount is prepared at the /proc location inside the container in place of the directly mounted proc virtual file system.
Further, a capacity limit on the container's rootfs is also added to Docker.
Due to the application of the above technical scheme, the invention has the following advantages over the prior art: by dividing the public container image in the service center into different image layers and preloading the necessary common base image layers, the lightweight resource virtualization and distribution method optimizes resource utilization in the container stack and improves image distribution and loading speed, so that services start as fast as possible while system resource consumption is kept as low as possible.
Drawings
FIG. 1 is a schematic diagram of the layer hierarchy of the public container image in the lightweight resource virtualization and allocation method of the present invention;
FIG. 2 is a diagram illustrating image preloading in the lightweight resource virtualization and allocation method of the present invention;
FIG. 3 is a schematic view of image distribution within the same service center in the lightweight resource virtualization and allocation method of the present invention;
FIG. 4 is a schematic view of image distribution among different service centers in the lightweight resource virtualization and allocation method of the present invention;
FIG. 5 is a schematic diagram of accessing the proc agent in the lightweight resource virtualization and allocation method of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The invention relates to a lightweight resource virtualization and distribution method, comprising the following steps:
(a) Dividing a public container image in a service center into different image layers, and acquiring image-layer management information; from bottom to top, the image layers comprise a kernel layer, an operating system layer, a common component layer, a development language layer, and a development framework layer (as shown in FIG. 1);
(b) Selecting the image of any node in the public container image as a base image, developing an application service image on top of it, and deriving the necessary common base image layers from the base image; a developer can select the image of any node in FIG. 1 as a base image according to requirements and develop the application service image on top of it, so that the container layering characteristic is fully exploited and reuse of the public image is maximized;
(c) Analyzing the image-layer management information, and preloading the necessary common base image layers according to the hardware characteristics and task mission of each node; by analyzing the image-layer management information together with node hardware characteristics, task mission, and similar information, the necessary common base image layers are preloaded, improving image distribution efficiency and reducing network traffic. Analysis suggests that image preloading can save up to 90% of image transmission bandwidth in the best case. If an application uses the image preloading mechanism, its service-center node loads base image layers such as the Python library and public components, based on rules and preset requirements, before executing a task; when the application service is later deployed, only the missing content is transmitted thanks to the image layering mechanism, greatly reducing the amount of data to be transmitted (as shown in FIG. 2);
(d) Pulling image-layer data block by block via the container engine within the same service center, or establishing a P2P network for image data transmission between different service centers. P2P image distribution further improves the efficiency of obtaining images, saves time and bandwidth, and improves the usability of the information platform in specific environments. Two P2P distribution scenarios are considered: one is P2P image distribution among the physical nodes of the same cluster within a service center; the other is image distribution between service centers (i.e., mobile information service centers), and between service centers and nodes, i.e., between weakly connected nodes.
Each service center (containing, for example, host A and host B) has a single image repository, and an image loader is deployed on every physical node running the container engine. The image loader reports the image metadata held by the local machine (i.e., the host) to the image repository, so the repository knows the distribution of all image layers in the cluster. When the container engine needs to pull an image, the image loader first retrieves the distribution of the required image layers from the image repository, then selects several nodes and pulls the image data from them block by block in parallel (as shown in FIG. 3).
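The block-by-block parallel pull can be sketched as follows. The peer layout and in-memory "fetch" are assumptions for illustration; a real loader would fetch blocks over the network from the nodes chosen via the repository's layer-distribution data.

```python
# Illustrative sketch of pulling one image layer in parallel blocks from
# several peer nodes, as in intra-cluster distribution. Peers are modeled
# as dicts mapping layer id -> layer bytes; all names are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def fetch_block(peer_data, block_id, block_size):
    """Simulate fetching one block of a layer from a peer's local copy."""
    start = block_id * block_size
    return block_id, peer_data[start:start + block_size]

def parallel_pull(layer_id, peers, block_size=4):
    """Pull a layer's blocks in parallel, spreading requests across peers."""
    data = peers[0][layer_id]          # the repository would know the layer size
    n_blocks = (len(data) + block_size - 1) // block_size
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [
            pool.submit(fetch_block, peers[i % len(peers)][layer_id], i, block_size)
            for i in range(n_blocks)   # round-robin blocks over peers
        ]
        blocks = dict(f.result() for f in futures)
    # Reassemble the layer in block order.
    return b"".join(blocks[i] for i in range(n_blocks))

peers = [{"layer1": b"ABCDEFGHIJ"}, {"layer1": b"ABCDEFGHIJ"}]
print(parallel_pull("layer1", peers))  # b'ABCDEFGHIJ'
```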
Image distribution between service centers relies on a central image loader in each service center. Each central image loader acquires image metadata from its service center's image repository, establishes contact with the central image loaders in other service centers, exchanges image metadata with them, and establishes a P2P network for image data transmission (as shown in FIG. 4).
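After the metadata exchange, each central image loader can decide which layers to fetch and from where. A minimal sketch, assuming a simple set-based catalog per center (the data structures and names are illustrative, not the patented protocol):

```python
# Hypothetical sketch: given the local layer catalog and the catalogs learned
# from other centers' loaders, map each missing layer to the centers that can
# serve it, so blocks of that layer can be pulled from several sources.
def plan_transfers(local_layers, remote_catalogs):
    """Map each missing layer id to the list of remote centers holding it."""
    plan = {}
    for center, layers in remote_catalogs.items():
        for layer in layers - local_layers:
            plan.setdefault(layer, []).append(center)
    return plan

local = {"kernel", "os"}
remote = {"center-B": {"kernel", "os", "python"},
          "center-C": {"python", "app"}}
plan = plan_transfers(local, remote)
print(plan)  # layers 'python' and 'app' must be fetched; 'python' has two sources
```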
In the present application, in step (a), generation of the public container image is regulated based on Dockerfile syntax: by studying the image generation process, a Dockerfile-based syntax optimization strategy is developed that simplifies image layering and reduces the size of the image file.
Specifically, a Dockerfile syntax optimizer is implemented to optimize the original Dockerfile, with the following rules:
(a1) Merging consecutive adjacent RUN commands into one RUN command using the && operator. RUN commands are among the most important commands in a Dockerfile; a Dockerfile usually contains many of them, and each command produces an image layer. Using && to merge consecutive adjacent RUN commands into one greatly reduces both the number of layers and the size of the final image;
(a2) Combining multiple ENV commands into one ENV command. Because each command produces an image layer, the optimizer merges all ENV commands in the Dockerfile into a single command to reduce the layer count;
(a3) Judging whether the source address of an ADD command is a local compressed file, and if not, changing the ADD command into a COPY command. Both the ADD and COPY commands in a Dockerfile copy the directory or file at the source address to the destination in the image file system, but ADD additionally decompresses archives; when only copying a local file to the container destination, the lighter COPY command should be used;
(a4) Expressing the parameters of the CMD and ENTRYPOINT commands in array form. In Dockerfile syntax, the parameters of CMD and ENTRYPOINT can be specified in two ways: appended to the command separated by spaces (shell form), or as an array (exec form). In shell form, the container prepends /bin/sh -c to the user-specified command, which can cause unpredictable errors; therefore the parameters of CMD and ENTRYPOINT are uniformly specified in array form.
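Rules (a1)-(a4) can be sketched as a small line-based optimizer. This is a deliberately simplified illustration assuming one instruction per line with no line continuations or build stages; the patented optimizer is not specified at this level of detail.

```python
# Simplified sketch of the Dockerfile syntax optimizer (rules a1-a4).
import json

COMPRESSED = (".tar", ".tar.gz", ".tgz", ".tar.bz2", ".tar.xz", ".zip")

def optimize(lines):
    out = []
    for line in lines:
        inst, _, args = line.strip().partition(" ")
        if inst == "RUN" and out and out[-1].startswith("RUN "):
            out[-1] += " && " + args                        # (a1) merge adjacent RUNs
        elif inst == "ENV" and any(l.startswith("ENV ") for l in out):
            i = next(i for i, l in enumerate(out) if l.startswith("ENV "))
            out[i] += " " + args                            # (a2) merge all ENVs
        elif inst == "ADD" and not args.split()[0].endswith(COMPRESSED):
            out.append("COPY " + args)                      # (a3) ADD -> COPY
        elif inst in ("CMD", "ENTRYPOINT") and not args.startswith("["):
            out.append(f"{inst} {json.dumps(args.split())}")  # (a4) exec form
        else:
            out.append(line)
    return out

dockerfile = [
    "FROM ubuntu:20.04",
    "RUN apt-get update",
    "RUN apt-get install -y python3",
    "ENV A=1",
    "ENV B=2",
    "ADD app.py /srv/app.py",
    "CMD python3 /srv/app.py",
]
for l in optimize(dockerfile):
    print(l)
```

On this input the seven instructions collapse to five: the two RUNs merge, the two ENVs merge, ADD becomes COPY, and CMD is rewritten to `CMD ["python3", "/srv/app.py"]`.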
Docker lacks isolation of the container's resource view. Docker can restrict the resources used by processes in a container, but those restrictions cannot be perceived automatically inside the runtime environment. When Docker provides an independent file system environment for a container, it also mounts the proc file system at /proc, so that processes in the container can interact with the system kernel through it. However, the total memory, CPU information, system start time, and similar values that a containerized process reads from the proc file system are the same as those obtained on the host, and inconsistent with the limits set when the container was started. For the large number of JVM-based Java applications, the startup script relies primarily on system memory capacity to size the JVM heap and stack. Thus an application in a container limited to 200 MB of memory on a 2 GB host will believe it can use the full 2 GB; the startup script passes 2 GB to the Java runtime for heap and stack sizing, which differs entirely from the 200 MB actually available, and the application will certainly fail to start. Many applications likewise judge their own performance and set their thread count from the number of CPU cores reported by the proc file system. Even though the user can limit the container's available CPU set with startup parameters, the container still sees the host's core count, so the application may behave unexpectedly.
The present application therefore replaces Docker's preparation of the proc file system for the container: a dedicated mount is prepared at the /proc location inside the container in place of the previously directly mounted proc virtual file system, taking over the container processes' access to /proc and enhancing container isolation. The takeover of /proc is performed by a FUSE program running on the host, called the proc agent, which implements a virtual file system using FUSE. When a process in the container accesses /proc, it actually accesses the virtual file system prepared by the proc agent; the agent returns specially prepared data to the process according to a policy, and the policy can be changed dynamically. Using the proc agent, the resource views of different containers can be isolated to a considerable extent, solving most of the problems caused by an unisolated resource view (as shown in FIG. 5). Docker also lacks a disk quota limit inside the container. When starting a container, Docker prepares a writable rootfs for it to hold all modifications that container processes make to the file system, but does not limit the capacity of that rootfs. The rootfs of all containers on a host share the host file system, and a process in any container can fill it completely, leaving no space for the other containers. The present application therefore adds a capacity limit on the container rootfs to Docker, ensuring that the container administrator can limit the disk quota (both space and inode count) inside the container, and enabling container-based resource management to account for the container's disk usage.
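The core of the proc agent's policy is rewriting selected proc entries before they reach the container. A minimal sketch of such a policy for /proc/meminfo follows; a real agent would serve the result through a FUSE file system, and the function and field formatting here are illustrative assumptions.

```python
# Hypothetical policy sketch: rewrite the MemTotal and MemFree lines of
# /proc/meminfo so a containerized process sees its memory limit instead
# of the host's totals. Other lines pass through unchanged.
def rewrite_meminfo(host_meminfo, limit_kb, used_kb):
    out = []
    for line in host_meminfo.splitlines():
        key = line.split(":")[0]
        if key == "MemTotal":
            out.append(f"MemTotal:       {limit_kb} kB")
        elif key == "MemFree":
            out.append(f"MemFree:        {limit_kb - used_kb} kB")
        else:
            out.append(line)  # pass through, e.g. Cached
    return "\n".join(out)

# Host with 2 GB total; container limited to 200 MB, 50 MB of it in use.
host = ("MemTotal:       2097152 kB\n"
        "MemFree:        1500000 kB\n"
        "Cached:         300000 kB")
print(rewrite_meminfo(host, limit_kb=204800, used_kb=51200))
```

With this policy in place, the JVM sizing problem from the paragraph above disappears: the startup script reads 200 MB rather than 2 GB and sizes the heap accordingly.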
The above embodiments are only for illustrating the technical idea and features of the present invention, and the purpose of the present invention is to enable those skilled in the art to understand the content of the present invention and implement the present invention, and not to limit the protection scope of the present invention by this means. All equivalent changes and modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.
Claims (2)
1. A lightweight resource virtualization and allocation method, characterized by comprising the following steps:
(a) Dividing a public container image in a service center into different image layers, and acquiring image-layer management information; from bottom to top, the image layers comprise a kernel layer, an operating system layer, a common component layer, a development language layer, and a development framework layer; regulating generation of the public container image based on Dockerfile syntax,
wherein the Dockerfile syntax regulation rules are:
(a1) Merging consecutive adjacent RUN commands into one RUN command using the && operator; RUN commands are among the most important commands in a Dockerfile, a Dockerfile usually contains many of them, and each command produces an image layer; using && to merge consecutive adjacent RUN commands into one greatly reduces the number of layers and the size of the final image;
(a2) Combining multiple ENV commands into one ENV command; because each command produces an image layer, all ENV commands in the Dockerfile are merged into one command to reduce the number of image layers;
(a3) Judging whether the source address of an ADD command is a local compressed file, and if not, changing the ADD command into a COPY command; both the ADD and COPY commands in a Dockerfile copy the directory or file at the source address to the destination in the image file system, but ADD additionally decompresses archives; when only copying a local file to the container destination, the lighter COPY command should be used;
(a4) Expressing the parameters of the CMD and ENTRYPOINT commands in array form;
when Docker provides an independent file system environment for the container, the proc file system is also mounted at /proc, so that processes in the container can interact with the system kernel through it; a dedicated mount is prepared at the /proc location inside the container in place of the previously directly mounted proc virtual file system, taking over the container processes' access to /proc so as to enhance container isolation;
(b) Selecting the image of any node in the public container image as a base image, developing an application service image on top of it, and deriving the necessary common base image layers from the base image; developers select the image of any node as a base image according to requirements and develop the application service image on top of it;
(c) Analyzing the image-layer management information, and preloading the necessary common base image layers according to the hardware characteristics and task mission of each node; through analysis of the image-layer management information, node hardware characteristics, and task mission information, the necessary common base image layers are preloaded, improving image distribution efficiency and reducing network traffic; when an application uses the image preloading mechanism, its service-center node loads the Python library and public-component base image layers before executing a task, based on rules and preset requirements, and subsequent deployment of the application service can rely on the image layering mechanism;
(d) Pulling image-layer data block by block via the container engine within the same service center, or establishing a P2P network for image data transmission between different service centers.
2. The lightweight resource virtualization and allocation method according to claim 1, wherein a capacity limit on the container's rootfs is also added to Docker.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010234317.7A CN111432006B (en) | 2020-03-30 | 2020-03-30 | Lightweight resource virtualization and distribution method |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010234317.7A CN111432006B (en) | 2020-03-30 | 2020-03-30 | Lightweight resource virtualization and distribution method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111432006A CN111432006A (en) | 2020-07-17 |
| CN111432006B true CN111432006B (en) | 2023-03-31 |
Family
ID=71549857
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010234317.7A Active CN111432006B (en) | 2020-03-30 | 2020-03-30 | Lightweight resource virtualization and distribution method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111432006B (en) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113760453B (en) * | 2021-08-04 | 2024-05-28 | 南方电网科学研究院有限责任公司 | Container mirror image distribution system and container mirror image pushing, pulling and deleting method |
| CN113918281A (en) * | 2021-09-30 | 2022-01-11 | 浪潮云信息技术股份公司 | Method for improving cloud resource expansion efficiency of container |
| CN114185641B (en) * | 2021-11-11 | 2024-02-27 | 北京百度网讯科技有限公司 | Virtual machine cold migration method and device, electronic equipment and storage medium |
| CN116204305B (en) * | 2022-12-21 | 2023-11-03 | 山东未来网络研究院(紫金山实验室工业互联网创新应用基地) | Method for limiting number of dock container inodes |
| CN119557042A (en) * | 2023-09-04 | 2025-03-04 | 华为云计算技术有限公司 | Method for providing container service, container system and related device |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2017092672A1 (en) * | 2015-12-03 | 2017-06-08 | 华为技术有限公司 | Method and device for operating docker container |
| CN108616419A (en) * | 2018-03-30 | 2018-10-02 | 武汉虹旭信息技术有限责任公司 | A kind of packet capture analysis system and its method based on Docker |
| CN110119377A (en) * | 2019-04-24 | 2019-08-13 | 华中科技大学 | Online migratory system towards Docker container is realized and optimization method |
Family Cites Families (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080082665A1 (en) * | 2006-10-02 | 2008-04-03 | Dague Sean L | Method and apparatus for deploying servers |
| CN106227579B (en) * | 2016-07-12 | 2020-01-31 | 深圳市中润四方信息技术有限公司 | Docker container construction method and Docker management console |
| CN107819802B (en) * | 2016-09-13 | 2021-02-26 | 华为技术有限公司 | Mirror image obtaining method in node cluster, node equipment and server |
| CN108021608A (en) * | 2017-10-31 | 2018-05-11 | 赛尔网络有限公司 | Docker-based lightweight website deployment method |
| US10536563B2 (en) * | 2018-02-06 | 2020-01-14 | Nicira, Inc. | Packet handling based on virtual network configuration information in software-defined networking (SDN) environments |
| CN108446166A (en) * | 2018-03-26 | 2018-08-24 | 中科边缘智慧信息科技(苏州)有限公司 | Fast virtual machine startup method |
| CN110096333B (en) * | 2019-04-18 | 2021-06-29 | 华中科技大学 | A non-volatile memory-based container performance acceleration method |
| CN110673923B (en) * | 2019-09-06 | 2024-09-13 | 中国平安财产保险股份有限公司 | XWIKI system configuration method, XWIKI system and computer equipment |
| CN110674043B (en) * | 2019-09-24 | 2023-09-12 | 聚好看科技股份有限公司 | Processing method and server for application debugging |
| CN111125003B (en) * | 2019-11-25 | 2024-01-26 | 中科边缘智慧信息科技(苏州)有限公司 | A lightweight and rapid distribution method for container images |
- 2020
    - 2020-03-30 CN CN202010234317.7A patent/CN111432006B/en active Active
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2017092672A1 (en) * | 2015-12-03 | 2017-06-08 | 华为技术有限公司 | Method and device for operating docker container |
| CN108616419A (en) * | 2018-03-30 | 2018-10-02 | 武汉虹旭信息技术有限责任公司 | Docker-based packet capture and analysis system and method |
| CN110119377A (en) * | 2019-04-24 | 2019-08-13 | 华中科技大学 | Implementation and optimization of a live migration system for Docker containers |
Non-Patent Citations (2)
| Title |
|---|
| "Docker – the Solution for Isolated Environments"; Nicolae Sirbu; https://dzone.com/articles/docker-the-solution-for-isolated-environments; 2018-05-02; full text * |
| "FID: A Faster Image Distribution System for Docker Platform"; Wang Kangjin, Yang Yong, Li Ying, Luo Hanmei, Ma Lin; 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems (FAS*W); 2017-09-22; full text * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN111432006A (en) | 2020-07-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111432006B (en) | Lightweight resource virtualization and distribution method | |
| Menouer | KCSS: Kubernetes container scheduling strategy. | |
| Talebian et al. | Optimizing virtual machine placement in iaas data centers: taxonomy, review and open issues | |
| US10129333B2 (en) | Optimization of computer system logical partition migrations in a multiple computer system environment | |
| CN113474765B (en) | Managed virtual warehouse for tasks | |
| KR20210019533A (en) | Operating system customization in on-demand network code execution systems | |
| US7702784B2 (en) | Distributing and geographically load balancing location aware communication device client-proxy applications | |
| CN111381928B (en) | Virtual machine migration method, cloud computing management platform and storage medium | |
| US10970113B1 (en) | Systems and methods for orchestrating seamless, distributed, and stateful high performance computing | |
| CN104050042A (en) | Resource allocation method and resource allocation device for ETL (Extraction-Transformation-Loading) jobs | |
| JP2010134518A (en) | Method of managing configuration of computer system, computer system, and program for managing configuration | |
| CN114697308A (en) | Edge node application update method and device | |
| CN113918281A (en) | Method for improving container cloud resource scaling efficiency | |
| CN116149843A (en) | Resource allocation method, network, storage medium and processor based on dynamic programming | |
| Lingayat et al. | Integration of linux containers in openstack: An introspection | |
| US20250258796A1 (en) | Continuous ingestion of custom file formats | |
| CN111459668A (en) | Lightweight resource virtualization method and device for server | |
| US11138178B2 (en) | Separation of computation from storage in database for better elasticity | |
| Tudoran et al. | Adaptive file management for scientific workflows on the azure cloud | |
| Koutsovasilis et al. | A holistic approach to data access for cloud-native analytics and machine learning | |
| US11868805B2 (en) | Scheduling workloads on partitioned resources of a host system in a container-orchestration system | |
| CN118591794A (en) | Expose controls to enable interactive schedulers for cloud cluster orchestration systems | |
| Neuwirth | Assessment of the I/O and storage subsystem in modular supercomputing architectures | |
| CN114579269B (en) | Task scheduling method and device | |
| CN119829296B (en) | Load balancing method and system for a storage-compute integrated hyper-converged server |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |