CN109491794A - Method for managing resource, device and electronic equipment - Google Patents
Info
- Publication number
- CN109491794A (application number CN201811392734.3A)
- Authority
- CN
- China
- Prior art keywords
- resource
- queue
- resources
- target
- thread
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5018—Thread allocation
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
This application provides a resource management method, an apparatus, and an electronic device. In the resource management method, resources are divided for a queue and a target resource of the queue is determined; the queue is bound with a service thread, so that the service thread calls the target resource based on the queue bound to it and thereby realizes its function. With this application, each service thread uses its own queue and resources, so that each service thread has an independent target resource. When the service threads are processed, their resource calls are mutually independent and do not affect each other, and the service threads utilize the resources to the maximum extent.
Description
Technical Field
The present application relates to the field of resource processing technologies, and in particular, to a resource management method and apparatus, and an electronic device.
Background
In an enterprise big data analysis platform, all services use the same resource queue. Because every service draws on the same resources, a fault that occurs while one service is calling the resources can affect the resource calls of the other services, and the resources cannot be fully utilized.
Disclosure of Invention
In view of this, the present application provides the following technical solutions:
a method of resource management, comprising:
dividing resources for a queue, and determining target resources of the queue;
and binding the queue with a service thread, so that the service thread calls a target resource based on the queue bound with the service thread to realize the function of the service thread.
Optionally, the method further comprises:
and adjusting the resources of the target resources of the queue so as to enable the target resources of the queue to meet the resource calling condition of the business thread.
Optionally, the resource adjusting the target resource of the queue includes:
and adjusting the resources of the target resources of the queue based on preset adjustment conditions.
Optionally, the preset adjustment condition includes historical data of service thread resource invocation;
wherein the resource adjustment of the target resource of the queue based on the preset adjustment condition includes:
analyzing historical data called by the service thread resources to determine historical resource information;
and adjusting the target resource in the queue corresponding to the business thread based on the historical resource information.
Optionally, the resource adjusting the target resource of the queue includes:
and performing resource adjustment on the target resource of the queue based on the received adjustment request.
Optionally, the resource adjustment of the target resource of the queue based on the received adjustment request includes:
determining an incremental resource based on the received resource request;
and according to the incremental resources, performing resource adjustment on the target resources of the queue.
Optionally, the incremental resource is a first resource other than the target resource; wherein,
the resource adjustment of the target resource of the queue according to the incremental resource comprises:
and storing the first resource to a reserved area of the queue, and adding the first resource to a target resource corresponding to the queue.
Optionally, the dividing resources for the queue and determining the target resource of the queue includes:
and determining the target resource of the queue based on the resource matching condition of the queue.
A resource management apparatus, comprising:
the device comprises a dividing unit, a processing unit and a processing unit, wherein the dividing unit is used for dividing resources for a queue and determining target resources of the queue;
and the binding unit is used for binding the queue and the service thread, so that the service thread calls a target resource based on the queue bound with the service thread, and the function of the service thread is realized.
An electronic device, comprising: a memory and a processor, wherein,
the processor is used for dividing resources for the queue and determining target resources of the queue; and binding the queue with a service thread, so that the service thread calls a target resource based on the queue bound with the service thread to realize the function of the thread.
Therefore, compared with the prior art, in the resource management method, apparatus, and electronic device provided by the present application, resources are divided for queues, the target resource matched with each queue is determined, and the queues are bound with service threads, so that when a service thread calls resources, it can obtain only the target resource corresponding to the queue bound to it. Each service thread uses its own queue and resources, so that each service thread has an independent target resource; when the service threads are processed, they can call resources independently without affecting each other, and the resources are utilized to the maximum extent.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart illustrating a resource management method according to an embodiment of the present application;
fig. 2 is a flowchart illustrating a resource adjustment method according to an embodiment of the present application;
fig. 3 is a schematic flowchart illustrating another resource adjustment method provided in an embodiment of the present application;
FIG. 4 is a diagram illustrating resource pool partitioning according to an embodiment of the present application;
fig. 5 is a schematic structural diagram illustrating a resource management apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram illustrating another resource management apparatus provided in an embodiment of the present application;
fig. 7 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In an embodiment of the present application, a resource management method is provided, and referring to fig. 1, the method includes:
s101, dividing resources for the queue and determining target resources of the queue.
In a big data management platform or in computer application technology, multiple resources usually need to be used cooperatively to complete the processing of related services. For example, computer technology involves both hardware resources and software resources, and the related hardware resources and/or software resources need to be called when a certain service thread is processed.
In the embodiment of the application, resources are divided for the queues, so that each queue has a target resource matched with it. A queue is a data table divided according to service requirements or processing purposes; for example, on a big data platform a queue may be a service queue in which the different services to be executed are recorded. A matching resource is determined for each queue, and the resource may take the form of a resource pool, that is, each queue has a resource pool corresponding to it, or the form of a resource list, that is, a resource list corresponding to each queue may be determined. Specifically, a queue identifier may be set for each queue; the queue identifier is the unique identifier of the queue and is used to distinguish it from other queues. Correspondingly, a resource identifier is determined for each resource pool; the resource identifier represents the resource characteristics of the pool and can be used to distinguish between resource pools. The queue identifiers are in one-to-one correspondence with the resource identifiers, so that the resource pool corresponding to each queue can be clearly determined and the target resource is obtained through matching.
When resources are partitioned for a queue, the partitioning needs to be based on the actual requirement of the queue. For example, if the queue is a hardware test requirement queue, more of the relevant hardware resources need to be allocated to it. Meanwhile, resources can also be divided according to the service types stored in the queue; for example, if the queue is a message queue, the resources called when historical message queues were processed can be referenced to determine the target resource for the message queue.
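For illustration, the one-to-one correspondence between queue identifiers and resource-pool identifiers described above could be sketched as follows; this is a minimal sketch, and all names (ResourcePool, QueueRegistry, divide_resources) are hypothetical rather than taken from the application.

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    pool_id: str                          # resource identifier characterizing the pool
    resources: set = field(default_factory=set)

class QueueRegistry:
    """Keeps the one-to-one mapping between queue identifiers and resource pools."""
    def __init__(self):
        self._pools = {}                  # queue_id -> ResourcePool

    def divide_resources(self, queue_id: str, resources: set) -> ResourcePool:
        # Divide a target resource (here modelled as a resource pool) for the queue.
        pool = ResourcePool(pool_id=f"pool-{queue_id}", resources=set(resources))
        self._pools[queue_id] = pool
        return pool

    def target_resource(self, queue_id: str) -> ResourcePool:
        # Each queue identifier resolves to exactly one resource pool.
        return self._pools[queue_id]

registry = QueueRegistry()
registry.divide_resources("message-queue", {"memory:4g", "vcores:2"})
print(registry.target_resource("message-queue"))
```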
S102, binding the queue and the service thread, so that the service thread calls the target resource based on the queue bound with the service thread, and the function of the service thread is realized.
Different service threads can be provided for processing a service. Each service thread is bound to its corresponding queue, and the target resource corresponding to that queue can then be accessed through the queue. In this way, one service thread corresponds to one target resource, which avoids problems such as delay caused by multiple service threads calling the same resource. The service thread can then call the target resource to complete its processing, that is, to realize the related functions of the service thread. For example, if the purpose of a service thread's resource call is to process a message, the function realized is message transmission.
In order to improve processing speed and accuracy, in an embodiment of the present application, one service thread may be bound to one queue, and that queue corresponds to one target resource, so that one service thread corresponds to one target resource and each service thread uses its own uniquely corresponding queue and resource pool, which facilitates resource utilization and improves processing efficiency. In addition, several service threads with similar processing procedures can also correspond to one queue and one target resource, which reduces the work of resource division; however, so that the processing of each service thread is not affected, the service threads sharing one queue need to call resources in a staggered manner, that is, each service thread has its own resource calling time. In this way the resources of the service threads remain mutually independent and do not affect each other, and resource utilization is maximized.
In the process of binding a service thread to a queue, the binding needs to be performed based on the target resource already divided for the queue. For example, if the target resource divided for a first queue consists of production-environment resources, a service thread in the production environment is preferentially selected for binding to the first queue. Of course, the service thread and the queue may also be bound first, the resource requirement of the queue then determined based on the requirement of the service thread, and the target resource finally divided for the queue.
Each service thread can call the target resource only through the queue bound to it; it can access and use that target resource but cannot use resources through other queues. The service thread, the queue, and the target resource are thus matched with one another, which makes resource calls clearer and more accurate.
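A minimal sketch of this binding idea follows: a service thread may call resources only through the queue bound to it, and calls through any other queue are rejected. The class and method names are illustrative assumptions, not taken from the application.

```python
class BindingError(Exception):
    pass

class ServiceThread:
    """A service thread that may call resources only via the queue bound to it."""
    def __init__(self, name: str):
        self.name = name
        self.bound_queue_id = None

    def bind(self, queue_id: str) -> None:
        # Bind this thread to exactly one queue (and hence one target resource).
        self.bound_queue_id = queue_id

    def call_resources(self, pools: dict, queue_id: str) -> set:
        # Resource calls through any queue other than the bound one are rejected.
        if queue_id != self.bound_queue_id:
            raise BindingError(f"{self.name} is not bound to queue {queue_id!r}")
        return pools[queue_id]

pools = {"bu1-test": {"memory:4g", "vcores:2"}}   # queue id -> target resource
t = ServiceThread("bu1-worker")
t.bind("bu1-test")
print(t.call_resources(pools, "bu1-test"))
```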
For example, in software development there are a test environment and a production environment. For some development requirements, certain resources need to be called by the test environment and the production environment at the same time, which raises the problem of resource sharing between the two environments.
For example, resources can be divided per queue based on the related techniques in the Yarn (Yet Another Resource Negotiator) resource manager, and the queue is then bound to a specific service thread. That is, the scheduler may allocate system resources to each queue, and further to each running service thread, according to limitation conditions such as container and queue limits; for example, a certain amount of resources is allocated to each queue, and at most a certain number of service threads are executed.
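The per-queue limits mentioned above (a fixed resource share per queue and a cap on the number of concurrently running service threads) could be modelled as in the following sketch; it is an illustrative stand-in, not the actual Yarn scheduler API.

```python
class QueueScheduler:
    """Toy scheduler: each queue has a fixed resource share and a thread cap."""
    def __init__(self, capacity: int, max_threads: int):
        self.capacity = capacity          # e.g. memory units allocated to the queue
        self.max_threads = max_threads    # at most this many service threads run at once
        self.running = []

    def submit(self, thread_name: str, demand: int) -> bool:
        used = sum(d for _, d in self.running)
        if len(self.running) >= self.max_threads or used + demand > self.capacity:
            return False                  # queue limit reached; request is held back
        self.running.append((thread_name, demand))
        return True

q = QueueScheduler(capacity=100, max_threads=2)
print(q.submit("job-1", 40), q.submit("job-2", 40), q.submit("job-3", 10))
# -> True True False (the third submission exceeds the thread cap)
```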
The embodiment provides a resource management method, which is implemented by dividing resources for queues, determining a target resource matched with each queue, and binding the queues and a service thread, so that when the service thread calls the resources, the service thread can only obtain the target resource corresponding to the queue based on the queue bound to the service thread. The service threads use the queues and resources thereof, so that each service thread has independent target resources, and when the service threads are processed, the service threads can call the resources independently without influencing each other, and the service threads can realize the maximum utilization of the resources.
Since the target resource matched with each queue is divided in advance, in practical applications where a service thread calls resources, the divided target resource may not meet the requirement of the current service thread, which requires dynamic adjustment of the target resource. Therefore, another embodiment of the present application further includes a resource adjustment method, where the method includes:
and adjusting the resources of the target resources of the queue so as to enable the target resources of the queue to meet the resource calling condition of the business thread.
In another embodiment of the present application, the resource adjustment process includes two aspects.
On one hand, the target resource of the queue is adjusted based on a preset adjustment condition; on the other hand, the target resource of the queue is adjusted based on a received adjustment request.
The preset adjustment condition is a condition for deciding when to dynamically adjust the target resource, and may include a service thread processing progress condition, a service thread completion efficiency condition, a time-based adjustment condition, and/or historical data of service thread resource invocation. Determining the trigger conditions for dynamic adjustment from these conditions avoids the low processing efficiency and resource waste that adjusting resources in real time would cause.
When the historical data of service thread resource invocation is used as the resource adjustment condition, the resource adjustment method, referring to fig. 2, may include:
s201, analyzing historical data called by service thread resources, and determining historical resource information;
s202, adjusting target resources in the queue corresponding to the business thread based on the historical resource information.
When the historical data of a service thread's resource invocation is analyzed, statistics should be gathered on the historical data of service threads with similar or identical processing requirements or processing purposes, so that the analysis is more targeted. Analyzing this historical data yields historical resource information, which can indicate commonly used resources, resource calling times, resource utilization, and the like, and the target resource can be dynamically adjusted according to this information.
The historical data may also serve as the basic reference information for dividing resources. If resources are first divided according to the historical data and then adjusted, the resource-calling history of the service thread can be obtained again according to the thread's processing progress or processing stage, so that the target resources accessible in the subsequent processing of the service thread better meet the actual requirements.
For example, when resource adjustment is performed on a first target resource corresponding to a first service thread, the historical data corresponding to the first service thread is analyzed to obtain historical resource information. If the historical resource information includes utilization information and resource-calling time information for different resources, the adjustment is made based on this information. From the utilization information, the calling frequency of each resource can be determined and it can be judged whether the frequently called resources are already in the target resource; if not, those resources are supplemented into the resource pool corresponding to the target resource. The target resource can also be adjusted according to the resource-calling time information: if the storage capacity corresponding to the target resource is limited and cannot hold all the resources corresponding to the service thread, the resources can be dynamically allocated and adjusted over time so that both the processing requirement of the service thread and the availability of the container's storage capacity are ensured.
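One possible reading of this history-driven adjustment, sketched with an assumed history format of (resource, utilization) samples and hypothetical names, is:

```python
def adjust_by_history(target_pool: set, history: list, usage_threshold: float = 0.5) -> set:
    """Add frequently used resources missing from the pool; drop rarely used ones.

    history is a list of (resource, utilization) samples from past calls of the
    same (or a similar) service thread -- an assumed format, not from the filing.
    """
    totals, counts = {}, {}
    for resource, utilization in history:
        totals[resource] = totals.get(resource, 0.0) + utilization
        counts[resource] = counts.get(resource, 0) + 1

    adjusted = set(target_pool)
    for resource in totals:
        avg = totals[resource] / counts[resource]
        if avg >= usage_threshold:
            adjusted.add(resource)        # frequently called: ensure it is in the pool
        elif resource in adjusted:
            adjusted.discard(resource)    # rarely called: free it for other queues
    return adjusted

history = [("spark-executor", 0.9), ("spark-executor", 0.8), ("gpu-node", 0.1)]
print(adjust_by_history({"gpu-node"}, history))   # -> {'spark-executor'}
```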
If the resource adjustment is performed on the target resource in response to the adjustment request, referring to fig. 3, the corresponding resource adjustment method may include:
s301, determining incremental resources based on the received resource request;
s302, according to the incremental resources, resource adjustment is carried out on the target resources of the queue.
Because the target resource matched with the queue is divided in advance, when the service thread bound to the queue calls resources, the target resource may not completely match the processing requirement of the service thread. At this time, the processing end corresponding to the service thread can send a resource request, that is, it requests an adjustment of the target resource. The resource request includes resources to be added and/or deleted; these resources are collectively regarded as the incremental resource, and the target resource is then adjusted according to the incremental resource.
For example, if the processing end corresponding to the service thread finds, when calling the target resource, that some resources are not in the resource pool matched with the queue, a corresponding resource increase request is generated, and the corresponding resources are added to the original target resource in response to that request. Correspondingly, to avoid an overly long traversal over all resources during resource calls, the processing end of the service thread can also send a resource reduction request, that is, temporarily unneeded resources are removed from the target resource matched with the queue.
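A hedged sketch of handling such an adjustment request from the processing end: the request carries resources to add and resources to remove, which together form the incremental resource; the request format shown is an assumption.

```python
def apply_adjustment_request(target_pool: set, request: dict) -> set:
    """Apply a resource request; the 'add' and 'remove' keys are an assumed format."""
    adjusted = set(target_pool)
    adjusted |= set(request.get("add", []))      # resources missing from the matched pool
    adjusted -= set(request.get("remove", []))   # temporarily unneeded resources
    return adjusted

pool = {"hdfs-datanode", "spark-executor"}
request = {"add": ["kafka-broker"], "remove": ["spark-executor"]}
print(apply_adjustment_request(pool, request))
# -> {'hdfs-datanode', 'kafka-broker'}
```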
It should be noted that, no matter whether the resource is dynamically adjusted based on the preset adjustment condition or the adjustment request, a resource adjustment time point may be set, and the unified resource adjustment may be performed at the time point, so that the resource adjustment may be performed while avoiding the peak of resource invocation, and further the resource invocation process of the service thread is not affected. Similarly, a peak shifting mode can be adopted in the resource dividing and resource using processes, and the maximum resource utilization is realized.
For example, based on the resource usage history, different resource adjustment processes can be applied intelligently according to periods such as the morning peak and the flat (off-peak) period. Specifically, when the resources of the production environment and the test environment are isolated from each other, resources can be dynamically reallocated between the production peak queue and the off-peak queue during the 0:00-9:00 period.
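The off-peak adjustment window could be enforced with a simple time check, as sketched below; the 0:00-9:00 window follows the example above, and everything else is illustrative.

```python
from datetime import datetime

def in_adjustment_window(now: datetime, start_hour: int = 0, end_hour: int = 9) -> bool:
    # Resource adjustment is only performed inside the off-peak window,
    # so it never competes with peak-time resource calls.
    return start_hour <= now.hour < end_hour

print(in_adjustment_window(datetime(2018, 11, 21, 3, 0)))   # True  (off-peak)
print(in_adjustment_window(datetime(2018, 11, 21, 14, 0)))  # False (peak hours)
```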
Corresponding to the foregoing embodiment, in another embodiment of the present application, an incremental resource is a first resource other than a target resource, where performing resource adjustment on the target resource of a queue according to the incremental resource includes:
s401, storing the first resource to a reserved area of the queue, and adding the first resource to a target resource corresponding to the queue.
Since the first resource is a newly added resource, it needs to be added to the original target resource. When the queues are arranged, certain reserved areas need to be set aside so that the requirement of adding extra resources can be met; for example, some Yarn resources may be reserved, or the server may be temporarily expanded.
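A sketch of the reserved-area idea: the queue keeps some spare capacity so that a newly added first resource can be stored without disturbing the original target resource. The names and the slot-based capacity model are assumptions.

```python
class QueueWithReserve:
    """Target resource plus a reserved area for resources added later."""
    def __init__(self, target: set, reserve_slots: int):
        self.target = set(target)
        self.reserve_slots = reserve_slots   # spare capacity kept aside for growth

    def add_first_resource(self, resource: str) -> bool:
        if self.reserve_slots == 0:
            return False                     # reserve exhausted; would need e.g. server expansion
        self.reserve_slots -= 1
        self.target.add(resource)            # store in the reserve and extend the target resource
        return True

q = QueueWithReserve({"memory:50g"}, reserve_slots=1)
print(q.add_first_resource("memory:10g"), q.target)
```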
In another embodiment of the present application, when partitioning resources for a queue, the method further includes:
and determining the target resource of the queue based on the queue resource matching condition.
The queue resource matching condition characterizes the features of the queue and its requirements for the target resource, and may include information such as storage capacity and storage format; resources are divided based on this information. For example, if the resource storage capacity of a queue is 50 G, the corresponding 50 G of resources may be divided as the target resource of the queue, or 50 G may be used as the maximum threshold of the resource storage capacity. If the determined target resource exceeds the storage capacity, a main resource and an adjustable resource can be distinguished to facilitate subsequent resource adjustment, and the resources can be stored in compressed form by means of data compression.
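The capacity-based matching condition in the 50 G example might be expressed as follows; the names and the simplified size model are assumptions made for illustration.

```python
def match_resources(candidates: dict, capacity_gb: int = 50) -> set:
    """Pick candidate resources for a queue without exceeding its storage capacity.

    candidates maps resource name -> size in GB (assumed format). Resources are
    taken largest-first until the queue's capacity threshold would be exceeded.
    """
    selected, used = set(), 0
    for name, size in sorted(candidates.items(), key=lambda kv: -kv[1]):
        if used + size <= capacity_gb:
            selected.add(name)
            used += size
    return selected

print(match_resources({"dataset-a": 30, "dataset-b": 25, "dataset-c": 10}))
# -> {'dataset-a', 'dataset-c'} (dataset-b would push the total past 50 GB)
```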
In another embodiment of the present application, a method for implementing resource allocation based on Yarn queues is also provided. Within the whole Yarn resource pool, each service thread uses its own queue and resource pool, replacing the mode in which all threads share one resource pool, so that the threads do not affect one another. Resource calling based on Yarn in this way maximizes resource throughput and utilization. Resource queues are determined according to the requirements of the service threads; suppose the resources are divided into a first queue and a second queue. The Yarn scheduler maintains information on this group of queues, a user submits a service thread application request to one or more of the queues, the scheduler then selects one queue according to a certain rule, and the service thread is bound to that queue, so that the service thread can access the corresponding resources based on the current queue. Referring to fig. 4, the resource partitioning of the entire Yarn resource pool involves service threads BU1, BU2, and BU3: the queue corresponding to BU1 is the first service thread test queue, BU1Test, which corresponds to a first resource pool; the queue corresponding to BU2 is BU2Test, which corresponds to a second resource pool; and the queue corresponding to BU3 is BU3Test, which corresponds to a third resource pool.
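The partitioning of fig. 4 could be expressed as a simple mapping from business units to test queues and resource pools, as in the sketch below; the pool shares are illustrative values, not figures from the application, and this is not actual Yarn configuration.

```python
# Queue names follow fig. 4 (BU1Test, BU2Test, BU3Test); the shares are
# illustrative fractions of the overall Yarn resource pool.
yarn_pools = {
    "BU1Test": {"pool": "pool-1", "share": 0.4},
    "BU2Test": {"pool": "pool-2", "share": 0.3},
    "BU3Test": {"pool": "pool-3", "share": 0.3},
}

def queue_for_business_thread(bu: str) -> str:
    # BU1 -> BU1Test, BU2 -> BU2Test, ...: each business unit's thread gets its own queue/pool.
    return f"{bu}Test"

print(queue_for_business_thread("BU1"), yarn_pools[queue_for_business_thread("BU1")])
```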
In another embodiment of the present application, there is also provided a resource management apparatus, referring to fig. 5, the apparatus including:
a dividing unit 501, configured to divide resources for a queue and determine a target resource of the queue;
a binding unit 502, configured to bind the queue with a service thread, so that the service thread invokes a target resource based on the queue bound to the service thread, thereby implementing a function of the service thread.
In the resource management device provided in this embodiment, a target resource is matched for each queue through the dividing unit, and then the business thread is bound to the queue in the binding unit, so that the target resource can be accessed based on the queue. The service threads use the queues and resources thereof, so that each service thread has independent target resources, and when the service threads are processed, the service threads can call the resources independently without influencing each other, and the service threads can realize the maximum utilization of the resources.
On the basis of the above embodiment of the resource management device, referring to fig. 6, in another embodiment of the present application, the resource management device includes, in addition to the dividing unit 501 and the binding unit 502 in the above embodiment, further:
the adjusting unit 601 is configured to adjust a target resource of the queue, so that the target resource of the queue meets a resource calling condition of the business thread.
On the basis of the above-described embodiment, still referring to fig. 6, the adjusting unit 601 includes:
a first adjusting subunit 6011, configured to perform resource adjustment on the target resource of the queue based on a preset adjustment condition;
a second adjusting subunit 6012, configured to perform resource adjustment on the target resource of the queue based on the received adjustment request.
In the first adjusting subunit 6011, if the preset adjustment condition includes history data of the service thread resource invocation, the first adjusting subunit 6011 is specifically configured to:
analyzing historical data called by the service thread resources to determine historical resource information;
and adjusting the target resource in the queue corresponding to the business thread based on the historical resource information.
Correspondingly, the second adjusting subunit 6012 is specifically configured to:
determining an incremental resource based on the received resource request;
and according to the incremental resources, performing resource adjustment on the target resources of the queue.
Specifically, the incremental resource is a first resource other than the target resource; wherein,
the resource adjustment of the target resource of the queue according to the incremental resource comprises:
and storing the first resource to a reserved area of the queue, and adding the first resource to a target resource corresponding to the queue.
The target resource can be dynamically adjusted through the adjusting unit, so that the adjusted resource can better meet the calling requirement of the business thread. In order to achieve maximum utilization of resources in the resource adjustment process, peak shifting adjustment may be adopted or adjustment may be completed at a preset time point, for example, early peak and flat peak of resource utilization are distinguished, so as to perform efficient adjustment of resources.
On the basis of the foregoing embodiment, the dividing unit in another embodiment of the present application is specifically configured to:
and determining the target resource of the queue based on the resource matching condition of the queue.
When resource division is performed on the queue, the actual requirement of the queue and corresponding matching conditions, such as storage capacity, resource calling time, and the like, need to be considered, so that the requirement of the business thread bound with the queue can be met.
An electronic device is also provided in the embodiments of the present application, referring to fig. 7, the electronic device includes a memory 70 and a processor 71, where the memory 70 is used to store an executable program, and the processor 71 is used to execute the program stored in the memory 70, that is, the processor 71 is specifically used to execute the following program steps:
s701, dividing resources for a queue, and determining target resources of the queue;
s702, binding the queue and the service thread, so that the service thread calls a target resource based on the queue bound with the service thread, and the function of the service thread is realized.
In another embodiment, the processor 71 may also be implemented by executing an executable program stored in the memory 70:
s703, adjusting the resources of the target resources of the queue, so that the target resources of the queue meet the resource calling conditions of the business thread.
In another embodiment, the processor 71 may also be implemented by executing an executable program stored in the memory 70:
s704, based on preset adjusting conditions, performing resource adjustment on the target resources of the queue.
In another embodiment, where the preset adjustment condition includes historical data of the service thread resource invocation, the processor 71 may further implement, by executing an executable program stored in the memory 70:
s705, analyzing historical data called by the service thread resource to determine historical resource information;
s706, based on the historical resource information, adjusting the target resource in the queue corresponding to the business thread.
In another embodiment, the processor 71 may also be implemented by executing an executable program stored in the memory 70:
s707, based on the received adjustment request, performing resource adjustment on the target resource of the queue.
In another embodiment, the processor 71 may also be implemented by executing an executable program stored in the memory 70:
s708, determining incremental resources based on the received resource request;
and S709, according to the incremental resource, adjusting the resource of the target resource of the queue.
In another embodiment, when the incremental resource is a first resource other than the target resource, the processor 71 may further implement, by executing an executable program stored in the memory 70:
s710, storing the first resource to a reserved area of the queue, and adding the first resource to a target resource corresponding to the queue.
In another embodiment, the processor 71 may also be implemented by executing an executable program stored in the memory 70:
s711, determining target resources of the queue based on the resource matching conditions of the queue.
According to the electronic device, the resources are divided for the queues, the target resources matched with each queue are determined, and the queues are bound with the service threads, so that when the service threads call the resources, the target resources corresponding to the queues can be obtained only based on the queues bound with the service threads. The service threads use the queues and resources thereof, so that each service thread has independent target resources, and when the service threads are processed, the service threads can call the resources independently without influencing each other, and the service threads can realize the maximum utilization of the resources.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media capable of storing program codes, such as a removable Memory device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, and an optical disk.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
In addition, it should be further noted that, in the embodiments described above, relational terms such as first and second are only used to distinguish one operation, unit, or module from another, and do not necessarily require or imply any actual relationship or order between these operations, units, or modules. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, or system that comprises the element.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.
Claims (10)
1. A method of resource management, comprising:
dividing resources for a queue, and determining target resources of the queue;
and binding the queue with a service thread, so that the service thread calls a target resource based on the queue bound with the service thread to realize the function of the service thread.
2. The method of claim 1, further comprising:
and adjusting the resources of the target resources of the queue so as to enable the target resources of the queue to meet the resource calling condition of the business thread.
3. The method of claim 2, the resource adjusting the target resource of the queue, comprising:
and adjusting the resources of the target resources of the queue based on preset adjustment conditions.
4. The method of claim 3, wherein the preset adjustment condition comprises historical data of business thread resource invocation;
wherein the resource adjustment of the target resource of the queue based on the preset adjustment condition includes:
analyzing historical data called by the service thread resources to determine historical resource information;
and adjusting the target resource in the queue corresponding to the business thread based on the historical resource information.
5. The method of claim 2, the resource adjusting the target resource of the queue, comprising:
and performing resource adjustment on the target resource of the queue based on the received adjustment request.
6. The method of claim 5, the resource adjusting the target resource of the queue based on the received adjustment request, comprising:
determining an incremental resource based on the received resource request;
and according to the incremental resources, performing resource adjustment on the target resources of the queue.
7. The method of claim 6, the incremental resource being a first resource outside of the target resource; wherein,
the resource adjustment of the target resource of the queue according to the incremental resource comprises:
and storing the first resource to a reserved area of the queue, and adding the first resource to a target resource corresponding to the queue.
8. The method of any of claims 1-7, wherein partitioning resources for a queue, determining a target resource for the queue, comprises:
and determining the target resource of the queue based on the resource matching condition of the queue.
9. A resource management apparatus, comprising:
the device comprises a dividing unit, a processing unit and a processing unit, wherein the dividing unit is used for dividing resources for a queue and determining target resources of the queue;
and the binding unit is used for binding the queue and the service thread, so that the service thread calls a target resource based on the queue bound with the service thread, and the function of the service thread is realized.
10. An electronic device, comprising: a memory and a processor, wherein,
the processor is used for dividing resources for the queue and determining target resources of the queue; and binding the queue with a service thread, so that the service thread calls a target resource based on the queue bound with the service thread to realize the function of the thread.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811392734.3A CN109491794A (en) | 2018-11-21 | 2018-11-21 | Method for managing resource, device and electronic equipment |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811392734.3A CN109491794A (en) | 2018-11-21 | 2018-11-21 | Method for managing resource, device and electronic equipment |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN109491794A true CN109491794A (en) | 2019-03-19 |
Family
ID=65697225
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811392734.3A Pending CN109491794A (en) | 2018-11-21 | 2018-11-21 | Method for managing resource, device and electronic equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN109491794A (en) |
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1794185A (en) * | 2005-12-30 | 2006-06-28 | 北京金山软件有限公司 | Resources calling method in multiline range process |
| CN101266559A (en) * | 2007-03-13 | 2008-09-17 | 国际商业机器公司 | Configurable microprocessor and method for dividing single microprocessor core as multiple cores |
| CN102207891A (en) * | 2011-06-10 | 2011-10-05 | 浙江大学 | Method for achieving dynamic partitioning and load balancing of data-partitioning distributed environment |
| CN104239150A (en) * | 2014-09-15 | 2014-12-24 | 杭州华为数字技术有限公司 | Method and device for adjusting hardware resources |
| CN105373434A (en) * | 2015-12-16 | 2016-03-02 | 上海携程商务有限公司 | Resource management system and method |
| CN106201723A (en) * | 2016-07-13 | 2016-12-07 | 浪潮(北京)电子信息产业有限公司 | The resource regulating method of a kind of data center and device |
Non-Patent Citations (1)
| Title |
|---|
| 曹倩著: "《异构多核任务模型优化技术》", 31 May 2013, 国防工业出版社 * |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110767302A (en) * | 2019-09-06 | 2020-02-07 | 广东宝莱特医用科技股份有限公司 | Data storage method, system and equipment for hemodialysis machine |
| CN113590313A (en) * | 2021-07-08 | 2021-11-02 | 杭州朗和科技有限公司 | Load balancing method and device, storage medium and computing equipment |
| CN113590313B (en) * | 2021-07-08 | 2024-02-02 | 杭州网易数之帆科技有限公司 | Load balancing method, device, storage medium and computing equipment |
| CN114327899A (en) * | 2021-12-29 | 2022-04-12 | 中国电信股份有限公司 | Method, apparatus, non-volatile storage medium and processor for responding to an access request |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20190319 |