CN105677484A - A multi-core CPU real-time data processing method with automatic load balancing - Google Patents
- Publication number: CN105677484A
- Application number: CN201610011906.2A
- Authority: CN (China)
- Prior art keywords: real-time data, CPU, task, core
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multi Processors (AREA)
Abstract
Description
Technical Field
The present invention relates to the field of data processing, and in particular to a multi-core CPU real-time data processing method with automatic load balancing.
Background
With the rapid development of microelectronics and computer technology, the clock frequency of the CPU (Central Processing Unit) is gradually approaching its limit. To further improve CPU processing performance, computer technology is increasingly moving toward multi-threading, multi-core and multi-dimensional designs; multi-core CPUs in particular have become the mainstream technology for sustaining Moore's Law.
Processing large-scale real-time data in parallel on a multi-core CPU, exploiting the parallelism of the processor cores to improve processing performance, is a common approach. However, how to distribute the data streams across the processor cores of the multi-core CPU so that the whole system is utilized optimally is a difficult point of such systems.
In addition, the defining characteristic of real-time data processing is timeliness: if the data stream cannot be guaranteed to be processed within a bounded time, some data will be lost unprocessed. How to guarantee this real-time behavior is another difficulty for the whole system.
Summary of the Invention
The present invention proposes a multi-core CPU real-time data processing method with automatic load balancing to address one or more deficiencies of the prior art.
The present invention proposes a multi-core CPU real-time data processing method with automatic load balancing, comprising: receiving real-time data to be processed for a multi-core CPU; and distributing the real-time data to the corresponding task interface, according to the current state of all task interfaces, the task running state of the CPU core connected to each task interface, and the attributes of the real-time data, so that the CPU core connected to that task interface can process it.
In one embodiment, distributing the real-time data to the corresponding task interface according to the current state of all task interfaces, the task running state of the CPU core connected to each task interface, and the attributes of the real-time data comprises: identifying, from the same information, at least one task interface that is currently able to receive the real-time data; and distributing the real-time data to the identified task interface.
In one embodiment, before distributing the real-time data to the corresponding task interface, the method further comprises: caching the real-time data in a shared cache until it is distributed to a task interface.
In one embodiment, before distributing the real-time data to the identified task interface, the method comprises: caching the real-time data in an independent cache of the identified task interface, so as to smooth the imbalance between the rate at which the data is delivered to the CPU core and the rate at which the CPU core processes it.
In one embodiment, the method further comprises: setting the number of task interfaces per CPU core and/or the attributes of the data handled by each CPU processing task, according to one or more of the current state of all task interfaces, the task running state of the CPU core connected to each task interface, and the attributes of the real-time data.
In one embodiment, the method further comprises: setting the size of the shared cache according to one or more of the current state of all task interfaces, the task running state of the CPU core connected to each task interface, and the attributes of the real-time data.
In one embodiment, the method further comprises: setting the size of the independent cache according to one or more of the current state of all task interfaces, the task running state of the CPU core connected to each task interface, and the attributes of the real-time data.
In one embodiment, the real-time data comprises a plurality of data granules, and distributing the real-time data to the corresponding task interface according to the current state of all task interfaces, the task running state of the CPU core connected to each task interface and the attributes of the real-time data, for processing by the CPU core connected to that task interface, comprises: distributing each data granule, in parallel, to the corresponding task interface according to the current state of all task interfaces, the task running state of the CPU core connected to each task interface and the attributes of the data granule, for processing by the CPU core connected to that task interface.
In one embodiment, the method further comprises: detecting changes in the number of tasks running on each CPU core; and redistributing the real-time data already dispatched to the task interfaces according to the changed set of running tasks, so as to balance the data load across the CPU cores of the multi-core CPU.
In one embodiment, the method further comprises: determining whether the resource consumption ratio of a CPU core of the multi-core CPU exceeds a set value; and, if so, reassigning the real-time data of the CPU core whose resource consumption ratio exceeds the set value to other CPU cores of the multi-core CPU whose resource consumption ratio does not exceed the set value.
With the automatic load balancing multi-core CPU real-time data processing method of the present invention, applications that perform multi-task real-time data processing on a multi-core CPU can be load-balanced automatically: based on the number of tasks started on the multi-core CPU, the running state of the tasks, the state of the task interfaces and the attributes of the data, the data to be processed is assigned to the CPU cores automatically, ensuring reasonable, full utilization of CPU resources and real-time data processing. The method can also use a dynamically shared cache to buffer the pending data of each CPU core, which improves the real-time performance of data processing and reduces or avoids the loss of unprocessed data, thereby optimizing the performance of the whole system.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort. In the drawings:
Fig. 1 is a schematic structural diagram of a multi-core-CPU-based real-time data processing system according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the multi-core CPU real-time data processing method with automatic load balancing according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a data distribution method in an embodiment of the present invention;
Fig. 4 is a schematic flowchart of a multi-core CPU real-time data processing method with automatic load balancing according to another embodiment of the present invention;
Fig. 5 is a schematic flowchart of a data distribution method in another embodiment of the present invention;
Fig. 6 is a schematic flowchart of a multi-core CPU real-time data processing method with automatic load balancing according to yet another embodiment of the present invention;
Fig. 7 is a schematic flowchart of a multi-core CPU real-time data processing method with automatic load balancing according to a further embodiment of the present invention;
Fig. 8 is a schematic flowchart of a data distribution method in a further embodiment of the present invention;
Fig. 9 is a schematic flowchart of a multi-core CPU real-time data processing method with automatic load balancing according to yet another embodiment of the present invention;
Fig. 10 is a schematic flowchart of a multi-core CPU real-time data processing method with automatic load balancing according to another embodiment of the present invention;
Fig. 11 is a schematic structural diagram of the multi-core CPU real-time data processing device with automatic load balancing according to an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of a real-time data processing unit in an embodiment of the present invention;
Fig. 13 is a schematic structural diagram of a multi-core CPU real-time data processing device with automatic load balancing according to another embodiment of the present invention;
Fig. 14 is a schematic structural diagram of a real-time data processing unit in another embodiment of the present invention;
Fig. 15 is a schematic structural diagram of a multi-core CPU real-time data processing device with automatic load balancing according to a further embodiment of the present invention;
Fig. 16 is a schematic structural diagram of a multi-core CPU real-time data processing device with automatic load balancing according to yet another embodiment of the present invention;
Fig. 17 is a schematic structural diagram of a real-time data processing unit in yet another embodiment of the present invention;
Fig. 18 is a schematic structural diagram of a multi-core CPU real-time data processing device with automatic load balancing according to another embodiment of the present invention;
Fig. 19 is a schematic structural diagram of a multi-core CPU real-time data processing device with automatic load balancing according to yet another embodiment of the present invention.
Detailed Description
To make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the drawings. The exemplary embodiments and their description are intended to explain the present invention, not to limit it.
The present invention proposes a multi-core CPU real-time data processing method, system and device with automatic load balancing. The method may be implemented on the system of the present invention, or, as needed, on systems of other structures. For applications that perform multi-task real-time data processing on a multi-core CPU, the method performs automatic load balancing: based on the number of tasks started on the multi-core CPU, the running state of the tasks, the state of the task interfaces and the attributes of the data, the data to be processed is assigned to the CPU cores automatically, ensuring reasonable, full utilization of CPU resources and real-time data processing.
Fig. 1 is a schematic structural diagram of a multi-core-CPU-based real-time data processing system according to an embodiment of the present invention. As shown in Fig. 1, the real-time data processing system comprises a multi-core CPU 100 and a task interface module 200. Each CPU core in the multi-core CPU 100 can run one or more tasks 101; the task interface module 200 contains multiple task interfaces 201; each task 101 is bound to one task interface 201, from which it receives real-time data.
The real-time data processing system may further comprise load balancing distribution channels 300 and a load balancing scheduling module 400. Each load balancing distribution channel 300 corresponds to one task interface 201. Under the control of the load balancing scheduling module 400, a load balancing distribution channel 300 distributes real-time data to the task interfaces 201, and a task 101 running on a CPU core receives the data delivered from the channel 300 through its task interface 201. The load balancing scheduling module 400 controls the flow of the real-time data and determines to which task interface it is distributed, based on information such as the number of tasks, the running state of each task, the state of the task interfaces and the attributes of the real-time data.
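As an illustration of the scheduling decision made by the load balancing scheduling module 400, the following minimal C++ sketch chooses a target task interface from the information described above (task state, interface cache fill level and data attributes). All type, field and function names here are illustrative assumptions and do not appear in the embodiment.

```cpp
#include <cstddef>
#include <optional>
#include <vector>

// Illustrative granule attribute: an application-level traffic class.
enum class TrafficClass { Http, Mail, P2pAv, Other };

// Per-task-interface view that the scheduling module (400) consults.
struct TaskInterfaceState {
    bool        task_running;     // is the bound task (101) alive and healthy?
    double      cpu_usage;        // CPU share consumed by the bound task, 0.0..1.0
    std::size_t queue_len;        // granules currently in the interface's independent cache
    std::size_t queue_cap;        // capacity of that independent cache
    std::vector<TrafficClass> accepted;  // data attributes the task can process
};

// Pick the least-loaded interface that is running, not full, within its CPU
// budget, and willing to accept this traffic class. Returns the index of the
// chosen interface, or nothing if no interface can take the granule right now
// (in which case the granule stays in the shared cache, module 500).
std::optional<std::size_t> pick_interface(
        const std::vector<TaskInterfaceState>& ifaces,
        TrafficClass attr, double cpu_threshold) {
    std::optional<std::size_t> best;
    double best_fill = 2.0;  // larger than any real fill ratio
    for (std::size_t i = 0; i < ifaces.size(); ++i) {
        const auto& s = ifaces[i];
        bool accepts = false;
        for (TrafficClass c : s.accepted) accepts |= (c == attr);
        if (!s.task_running || !accepts) continue;
        if (s.queue_len >= s.queue_cap) continue;      // independent cache full
        if (s.cpu_usage > cpu_threshold) continue;     // task over its CPU budget
        double fill = double(s.queue_len) / double(s.queue_cap);
        if (fill < best_fill) { best_fill = fill; best = i; }
    }
    return best;
}
```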
The real-time data processing system may further comprise a shared cache 500. Real-time data then need not be delivered directly to the CPU for processing; it can first enter the shared cache 500, be held there temporarily, and after some interval enter a load balancing distribution channel 300.
The task interface module 200 is the interface between the load balancing distribution channels 300 and the tasks 101 and is responsible for delivering data to each task. An independent cache can also be provided in each task interface module 200 to smooth the imbalance between the data delivery rate and the task execution speed.
The real-time data processing system may further comprise a configuration module 600. The configuration module 600 is responsible for configuring the other modules, including the size of the shared cache 500, the scheduling policy of the load balancing scheduling module 400, the size of the independent caches in the task interface module 200, and the number of task interfaces 201.
A task 101 is specific to the business logic of the real-time data it processes and is generally an application program. A task 101 can interact with the load balancing scheduling module 400: whether the task 101 is running normally, its running state and similar information can be obtained by the load balancing scheduling module 400. Different tasks 101 may be able to process real-time data with different attributes; for example, in Ethernet data processing some tasks 101 handle web traffic while others handle P2P audio/video traffic. The attributes of the data each task can process can also be obtained by the load balancing scheduling module 400, for example through automatic interaction with the task, or through manual configuration via the configuration module 600.
A task interface 201 is the interface between a load balancing distribution channel 300 and a task 101 and is responsible for delivering data to that task 101. Because the rate at which the load balancing distribution channel 300 feeds the task interface 201 and the rate at which the task 101 processes the data are inevitably unbalanced, an independent cache can be provided inside the task interface 201 to buffer the incoming pending data. When the task 101 consistently runs slower than the input rate, this independent cache gradually fills up, and the load balancing scheduling module 400 can gradually shift data to other task interfaces 201 to reduce the amount entering this task interface 201. When the task 101 consistently runs faster than the input rate, the independent cache gradually empties, and the load balancing scheduling module 400 can increase the data sent to this task interface 201. The empty/full state of the independent cache in each task interface 201 therefore allows the load balancing scheduling module 400 to perform effective automatic load balancing.
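A minimal sketch of such an independent cache and of the fill-level signal it exposes to the scheduler follows; synchronization between the distribution channel and the task is omitted for brevity, and the class and method names are illustrative assumptions.

```cpp
#include <cstddef>
#include <deque>
#include <optional>
#include <string>
#include <utility>

// Independent cache inside one task interface (201). The producer is the
// distribution channel (300); the consumer is the bound task (101).
// Its fill level is the feedback signal used for automatic load balancing.
class InterfaceCache {
public:
    explicit InterfaceCache(std::size_t capacity) : cap_(capacity) {}

    // Called by the distribution channel; fails when the cache is full,
    // in which case the granule stays in the shared cache.
    bool push(std::string granule) {
        if (q_.size() >= cap_) return false;
        q_.push_back(std::move(granule));
        return true;
    }

    // Called by the task when it is ready to process the next granule.
    std::optional<std::string> pop() {
        if (q_.empty()) return std::nullopt;
        std::string g = std::move(q_.front());
        q_.pop_front();
        return g;
    }

    // Fill ratio consulted by the scheduler: close to 1.0 means the task is
    // slower than its input and traffic should be shifted to other interfaces;
    // close to 0.0 means this interface can be given more traffic.
    double fill_ratio() const { return cap_ ? double(q_.size()) / double(cap_) : 1.0; }

private:
    std::deque<std::string> q_;
    std::size_t cap_;
};
```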
Logically, a task interface 201 looks like a data cache queue, but physically it also represents the data transfer interface from the outside world into host memory. Any type of data that is to be processed in parallel and in real time by multiple tasks running on the CPU must first enter host memory in real time. The most common way to move data from a host peripheral into host memory is a DMA (Direct Memory Access) channel, which does not require the CPU to take part in the data transfer.
The input real-time data enters the system and is finally processed in parallel by the multiple tasks 101 running on the multi-core CPU 100, and the individual data granules of the real-time data can be processed in parallel. A data granule is the granularity at which the load balancing module distributes data. For Ethernet data processing, each Ethernet packet is a data granule; for ATM network data processing, each ATM cell is a data granule; for voice signal processing, each independent coded voice stream is a data granule. The data granule is the smallest unit of load balancing scheduling: each granule either goes entirely to one task interface 201 or entirely to another task interface 201; it cannot be split so that part of it enters one task interface 201 and part enters another.
A load balancing distribution channel 300 distributes data to the task interfaces 201 under the control of the load balancing scheduling module 400. The channel 300 is connected to the shared cache 500, the load balancing scheduling module 400 and the task interfaces 201. It reads data out of the shared cache 500 and, based on the attributes of the data, the current state of the task interfaces and the running state of the tasks, decides where each data granule goes, i.e. its target task interface. If the cache of the current target task interface is not full and can accept the granule, the granule is delivered to that task interface. If the cache of the current target task interface is full and cannot accept the granule, the granule remains in the shared cache 500; some time later, when the data in the target task interface has decreased, the load balancing distribution channel 300 is triggered to read the corresponding granules out of the shared cache 500. In other words, delivery of data from a load balancing distribution channel 300 to the task interfaces 201 can be triggered in two ways: by the arrival of new real-time data, or by the cache of a target task interface changing from full to not full.
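The behavior of the distribution channel described above, including its two triggers, can be sketched as follows; the types and the single-attribute matching rule are simplifying assumptions made only for illustration.

```cpp
#include <cstddef>
#include <deque>
#include <vector>

struct Granule { int attr; };                 // attribute tag of one data granule

struct Interface {                            // one task interface (201)
    std::deque<Granule> cache;                // its independent cache
    std::size_t         cap;                  // capacity of that cache
    int                 accepted;             // single accepted attribute, for brevity
    bool accepts(int attr) const { return accepted == attr; }
};

// Shared cache (500): granules waiting because their target interface was full.
struct SharedCache { std::deque<Granule> pending; };

// One pass of the distribution channel (300). It is driven by two triggers:
// (a) new real-time input arriving, (b) a previously full target interface
// draining below capacity. Both triggers simply run another pass.
void distribute(SharedCache& shared, std::vector<Interface>& ifaces) {
    std::size_t n = shared.pending.size();
    for (std::size_t k = 0; k < n; ++k) {            // examine each waiting granule once
        Granule g = shared.pending.front();
        shared.pending.pop_front();
        bool delivered = false;
        for (auto& itf : ifaces) {
            if (itf.accepts(g.attr) && itf.cache.size() < itf.cap) {
                itf.cache.push_back(g);              // target not full: deliver whole granule
                delivered = true;
                break;
            }
        }
        if (!delivered) shared.pending.push_back(g); // target full: keep it in the shared cache
    }
}

// Trigger (a): new data enters the shared cache first, then a pass runs.
void on_input(SharedCache& shared, std::vector<Interface>& ifaces, Granule g) {
    shared.pending.push_back(g);
    distribute(shared, ifaces);
}

// Trigger (b): an interface reports that its cache went from full to not full.
void on_drain(SharedCache& shared, std::vector<Interface>& ifaces) {
    distribute(shared, ifaces);
}
```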
The load balancing scheduling module 400 determines to which task interface 201 a data granule is distributed, based on information such as the number of tasks 101, the running state of each task 101, the state of the task interfaces 201 and the attributes of the granule, and delivers the granule to that task interface 201 by controlling the load balancing distribution channel 300.
The number of tasks 101 is an important piece of information for the scheduling module 400: to schedule data evenly to every task 101, the current number of tasks must be known. The number of tasks 101 may change dynamically, and the load balancing scheduling module 400 can detect the current number of tasks and adjust accordingly. When the number of tasks 101 increases, data is assigned to the added tasks 101; when it decreases, data that was assigned to tasks that are no longer running is reassigned to tasks that are still running. Whether a task is currently running can be detected automatically through negotiation with the task, or configured manually.
The running state of a task 101 can include whether the task is running normally, the CPU resources consumed by the task, the memory resources consumed by the task, the priority of the task, and the attributes of the data the task can process. When a task 101 is not running normally, its data can be balanced onto other tasks. Thresholds can be set for the CPU and memory resources consumed by a task; once they are exceeded to a certain degree, data can be balanced onto other tasks. When a task has a low priority and overall CPU and memory consumption is high, the amount of data given to that task 101 can be reduced. When a task 101 declares the attributes of the data it can process, only data matching those attributes is delivered to it.
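A minimal C++ sketch of how the scheduler might combine these task-state criteria into a single eligibility check; the TaskState fields and the threshold parameters are illustrative assumptions rather than values taken from the embodiment.

```cpp
#include <vector>

// Running state of one task (101) as seen by the scheduling module (400).
struct TaskState {
    bool   running_ok;        // is the task running normally?
    double cpu_share;         // fraction of one core it currently consumes
    double mem_gb;            // memory it currently consumes, in GB
    int    priority;          // larger value = higher priority
    std::vector<int> attrs;   // attribute tags of the data it can process
};

// Decide whether more data of a given attribute may be sent to this task.
// cpu_limit / mem_limit are the configurable thresholds mentioned above;
// when the system as a whole is under pressure, low-priority tasks are
// throttled first (min_priority rises).
bool may_feed(const TaskState& t, int attr,
              double cpu_limit, double mem_limit, int min_priority) {
    if (!t.running_ok) return false;                      // balance away from failed tasks
    if (t.cpu_share > cpu_limit || t.mem_gb > mem_limit)  // resource thresholds exceeded
        return false;
    if (t.priority < min_priority) return false;          // deprioritized under pressure
    for (int a : t.attrs)                                  // attribute must match
        if (a == attr) return true;
    return false;
}
```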
The state of a task interface 201 can include the empty/full state of its independent cache and the fraction of that cache occupied by data. When a task interface 201 holds too much data, the load balancing scheduling module 400 can schedule data to other task interfaces 201, avoiding the situation where data piles up in one task interface while other task interfaces have too little.
The attribute information of a data granule is related to the specific processing performed by the tasks 101. For Ethernet packets, for example, IPv4/IPv6, TCP/UDP, the packet's five-tuple, its TCP flags and its application-layer protocol are all attributes of the granule. If a task 101 can only process IPv4 packets, the load balancing scheduling module schedules only IPv4 packets to that task 101; if a task 101 can only process web traffic, only HTTP packets are scheduled to that task 101.
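As an illustration of such granule attributes for Ethernet data, the following simplified C++ sketch extracts a few of the attributes mentioned above directly from a raw frame. It ignores VLAN tags and IPv6 upper layers, and the HTTP test (TCP destination port 80) is only a rough stand-in for real application-layer protocol identification.

```cpp
#include <cstddef>
#include <cstdint>

// Simplified attribute record for one Ethernet packet (one data granule).
struct PacketAttr {
    bool     ipv4     = false;
    bool     ipv6     = false;
    bool     tcp      = false;
    bool     udp      = false;
    uint16_t dst_port = 0;     // meaningful only for TCP/UDP over IPv4 here
};

// Extract the attributes the scheduler matches against task capabilities.
PacketAttr classify(const uint8_t* frame, std::size_t len) {
    PacketAttr a;
    if (len < 14) return a;                               // too short for an Ethernet header
    uint16_t ethertype = (uint16_t(frame[12]) << 8) | frame[13];
    if (ethertype == 0x0800 && len >= 34) {               // IPv4
        a.ipv4 = true;
        uint8_t ihl = frame[14] & 0x0F;                   // IP header length in 32-bit words
        if (ihl < 5) return a;                            // malformed header
        uint8_t proto = frame[23];                        // IPv4 protocol field
        std::size_t l4 = 14 + std::size_t(ihl) * 4;       // start of the transport header
        if (proto == 6)  a.tcp = true;
        if (proto == 17) a.udp = true;
        if ((a.tcp || a.udp) && len >= l4 + 4)
            a.dst_port = (uint16_t(frame[l4 + 2]) << 8) | frame[l4 + 3];
    } else if (ethertype == 0x86DD) {                     // IPv6
        a.ipv6 = true;
    }
    return a;
}

// Example matching rule: a task that only handles web traffic is fed only
// granules that look like HTTP (TCP destination port 80; HTTPS etc. omitted).
bool looks_like_http(const PacketAttr& a) { return a.tcp && a.dst_port == 80; }
```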
The shared cache 500 is responsible for buffering the data stream, which is eventually load-balanced to the multiple task interfaces. The data belonging to all task interfaces 201 shares this cache space, which improves cache utilization. Each task interface 201 has its own independent cache module; if a task processes data slowly, the independent cache in its task interface 201 gradually fills up, and shared cache space is then needed to hold data temporarily. Because the tasks execute at different speeds, some need the shared cache to hold data temporarily while for others the independent cache in the task interface 201 is sufficient; designing the cache so that all tasks can share it maximizes cache utilization.
The configuration module 600 is responsible for configuring the other modules and for providing an interface through which the user can configure the parameters of the whole system. These parameters can include the size of the shared cache, the data load balancing scheduling policy, the size of the caches in the task interface module, the number of task interfaces, and so on.
The scheduling policy of the load balancing scheduling module 400 can include many configurable parameters, for example the data attributes each task can process, whether high-priority tasks 101 are fed data first, the threshold at which automatic load balancing is triggered when the independent cache in a task interface 201 is nearly full, and the threshold at which automatic load balancing is triggered when a task 101 consumes too much CPU or memory.
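The configurable parameters listed above could be grouped, for example, into a single structure handled by the configuration module 600; the field names and default values below are illustrative assumptions only.

```cpp
#include <cstddef>
#include <vector>

// User-visible parameters handled by the configuration module (600).
struct SystemConfig {
    std::size_t shared_cache_bytes   = 256 * 1024 * 1024;  // size of the shared cache (500)
    std::size_t iface_cache_bytes    = 4 * 1024 * 1024;    // independent cache per task interface (201)
    std::size_t num_task_interfaces  = 8;                  // number of task interfaces
    double      iface_full_threshold = 0.9;   // rebalance when an interface cache is this full
    double      cpu_share_threshold  = 0.8;   // rebalance when a task exceeds this CPU share
    double      mem_gb_threshold     = 2.0;   // rebalance when a task exceeds this much memory
    bool        prefer_high_priority = true;  // feed high-priority tasks first
    std::vector<int> task_attr;               // accepted data attribute per task interface
};
```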
Fig. 2 is a schematic flowchart of the multi-core CPU real-time data processing method with automatic load balancing according to an embodiment of the present invention. As shown in Fig. 2, the method may comprise the following steps:
S110: receiving real-time data to be processed for the multi-core CPU;
S120: distributing the real-time data to the corresponding task interface, according to the current state of all task interfaces, the task running state of the CPU core connected to each task interface, and the attributes of the real-time data, for processing by the CPU core connected to that task interface.
Step S120 can be implemented by the load balancing scheduling module 400 and the load balancing distribution channels 300 of the multi-core-CPU-based real-time data processing system shown in Fig. 1, together with other necessary modules (for example, the task interface module 200).
By distributing the real-time data according to the current state of all task interfaces, the task running state of the CPU cores connected to the task interfaces, and the attributes of the data, the method of this embodiment can make full use of idle resources on some CPU cores and avoid overloading others with data, thereby automatically balancing the load of the multi-core CPU and ensuring reasonable, full utilization of CPU resources and real-time data processing.
Fig. 3 is a schematic flowchart of a data distribution method in an embodiment of the present invention. As shown in Fig. 3, in step S120 of the method of Fig. 2, distributing the real-time data to the corresponding task interface according to the current state of all task interfaces, the task running state of the CPU core connected to each task interface and the attributes of the real-time data may comprise the following steps:
S121: identifying at least one task interface that is currently able to receive the real-time data, according to the current state of all task interfaces, the task running state of the CPU core connected to each task interface, and the attributes of the real-time data;
S122: distributing the real-time data to the identified task interface.
Step S121 can be implemented by the load balancing scheduling module 400 of the above system, and step S122 by the load balancing distribution channel 300 together with the necessary modules (for example, the task interface module 200).
In this embodiment, by first identifying the task interfaces that can currently receive the real-time data and then distributing the data to the identified task interfaces, automatic load balancing of the multi-core CPU can be achieved effectively.
Fig. 4 is a schematic flowchart of a multi-core CPU real-time data processing method with automatic load balancing according to another embodiment of the present invention. As shown in Fig. 4, before step S120 of the method of Fig. 2, i.e. before the real-time data is distributed to the corresponding task interface, the method may further comprise the step:
S130: caching the real-time data in a shared cache until it is distributed to a task interface.
Step S130 can be implemented by the shared cache 500 of the above system.
In this embodiment, the shared cache can be shared by all tasks, which maximizes cache utilization. By dynamically sharing the cache to buffer the pending data of each CPU core, the real-time performance of data processing is improved and the loss of unprocessed data is reduced or avoided.
Fig. 5 is a schematic flowchart of a data distribution method in another embodiment of the present invention. As shown in Fig. 5, in the method of Fig. 3, before step S122, i.e. before the real-time data is distributed to the identified task interface, the method may comprise the step:
S123: caching the real-time data in the independent cache of the identified task interface, so as to smooth the imbalance between the rate at which the data is delivered to the CPU core and the rate at which the CPU core processes it.
Step S123 can be implemented by the independent cache provided in the task interface of the above system.
In this embodiment, the independent cache inside the task interface buffers the data in the task interface, smoothing the difference between the rate at which data is delivered to the CPU core and the rate at which the data is processed by the CPU core.
Fig. 6 is a schematic flowchart of a multi-core CPU real-time data processing method with automatic load balancing according to yet another embodiment of the present invention. As shown in Fig. 6, before step S120 of the method of Fig. 2, i.e. before the real-time data is distributed to the corresponding task interface, the method may further comprise the step:
S140: setting the number of task interfaces per CPU core and/or the attributes of the data handled by each CPU processing task, according to one or more of the current state of all task interfaces, the task running state of the CPU core connected to each task interface, and the attributes of the real-time data.
Step S140 can be implemented by the configuration module 600 of the above system, which provides an interface through which the user can set the number of task interfaces per CPU core and/or the attributes of the data handled by the CPU processing tasks.
In this embodiment, by setting the number of task interfaces per CPU core and/or the attributes of the data handled by the CPU processing tasks, the load on the CPU cores can be balanced further.
Fig. 7 is a schematic flowchart of a multi-core CPU real-time data processing method with automatic load balancing according to a further embodiment of the present invention. As shown in Fig. 7, before step S130 of the method of Fig. 4, i.e. before the real-time data is cached in the shared cache, the method may further comprise the step:
S150: setting the size of the shared cache according to one or more of the current state of all task interfaces, the task running state of the CPU core connected to each task interface, and the attributes of the real-time data.
Step S150 can be implemented by the configuration module 600 of the above system, which provides an interface through which the user can set the size of the shared cache based on one or more of the state of the task interfaces, the running state of the tasks, and the attributes of the real-time data.
In this embodiment, setting the size of the shared cache further prevents the loss of pending data when large amounts of data accumulate.
Fig. 8 is a schematic flowchart of a data distribution method in a further embodiment of the present invention. As shown in Fig. 8, in the method of Fig. 5, before step S123, i.e. before the real-time data is cached in the independent cache of the identified task interface, the method may comprise the step:
S124: setting the size of the independent cache according to one or more of the current state of all task interfaces, the task running state of the CPU core connected to each task interface, and the attributes of the real-time data.
Step S124 can be implemented by the configuration module 600 of the above system, which provides an interface through which the user can set the size of the independent cache based on one or more of the state of the task interfaces, the running state of the tasks, and the attributes of the real-time data.
In this embodiment, setting the size of the independent cache further improves cache utilization and improves the smoothing of the imbalance between the rate at which real-time data is delivered to a CPU core and the rate at which it is processed by the CPU core.
In one embodiment, the real-time data comprises a plurality of data granules. In step S120 of the method of Fig. 2, distributing the real-time data to the corresponding task interface according to the current state of all task interfaces, the task running state of the CPU core connected to each task interface and the attributes of the real-time data, for processing by the CPU core connected to that task interface, may comprise the step:
S1221: distributing each data granule, in parallel, to the corresponding task interface according to the current state of all task interfaces, the task running state of the CPU core connected to each task interface and the attributes of the data granule, for processing by the CPU core connected to that task interface.
In this embodiment, distributing the data to the corresponding task interfaces in units of data granules realizes the allocation of data effectively, and granules with the same attributes can be processed in parallel, which increases the speed of real-time data processing.
Fig. 9 is a schematic flowchart of a multi-core CPU real-time data processing method with automatic load balancing according to yet another embodiment of the present invention. As shown in Fig. 9, the method of Fig. 2 may further comprise the steps:
S160: detecting changes in the number of tasks running on each CPU core;
S170: redistributing the real-time data already dispatched to the task interfaces according to the changed set of running tasks, so as to balance the data load across the CPU cores of the multi-core CPU.
Step S160 can be implemented by the load balancing scheduling module 400 of the above system; step S170 can be implemented by the load balancing scheduling module 400 and the load balancing distribution channels 300 together with other necessary modules (for example, the task interface module 200).
In this embodiment, the change in the number of tasks running on each CPU core is detected and the CPU load is adjusted dynamically: when the number of tasks increases, data is assigned to the added tasks; when it decreases, data originally assigned to tasks that no longer run is reassigned to tasks that are still running. The CPU load can thus be adjusted in time, preventing load imbalance caused by changes in the number of tasks.
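As an illustration of this redistribution when the number of running tasks changes (steps S160 and S170), the following C++ sketch reclaims granules queued at interfaces whose task has stopped and re-deals them to the interfaces that are still running. The names (Interface, rebalance_on_task_change) are illustrative assumptions, and attribute matching is omitted for brevity.

```cpp
#include <cstddef>
#include <deque>
#include <vector>

struct Granule { int attr; };

struct Interface {
    bool running;                   // is the bound task still running?
    std::deque<Granule> cache;      // granules already dispatched to this interface
};

// Called after the scheduler detects that the set of running tasks changed
// (S160). Granules queued at interfaces whose task stopped are pulled back
// and re-dealt round-robin over the interfaces that are still running (S170).
void rebalance_on_task_change(std::vector<Interface>& ifaces) {
    std::deque<Granule> orphaned;
    std::vector<std::size_t> alive;
    for (std::size_t i = 0; i < ifaces.size(); ++i) {
        if (ifaces[i].running) {
            alive.push_back(i);
        } else {
            while (!ifaces[i].cache.empty()) {           // reclaim undelivered granules
                orphaned.push_back(ifaces[i].cache.front());
                ifaces[i].cache.pop_front();
            }
        }
    }
    if (alive.empty()) return;                           // nothing left to run on
    std::size_t next = 0;
    while (!orphaned.empty()) {                          // redistribute to live tasks
        ifaces[alive[next]].cache.push_back(orphaned.front());
        orphaned.pop_front();
        next = (next + 1) % alive.size();
    }
}
```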
Fig. 10 is a schematic flowchart of a multi-core CPU real-time data processing method with automatic load balancing according to another embodiment of the present invention. As shown in Fig. 10, the method of Fig. 2 may further comprise the steps:
S180: determining whether the resource consumption ratio of a CPU core of the multi-core CPU exceeds a set value;
S190: if so, assigning the real-time data of the CPU core whose resource consumption ratio exceeds the set value to other CPU cores of the multi-core CPU whose resource consumption ratio does not exceed the set value.
Step S180 can be implemented by the load balancing scheduling module 400 of the above system.
In this embodiment, a threshold is set for the CPU resources consumed by a running task, and once CPU consumption exceeds that level the data can be balanced onto other tasks. By reassigning the pending data of CPU cores that consume many resources to CPU cores that consume fewer resources, the data load of the CPU becomes more balanced.
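A minimal sketch of the threshold check of steps S180 and S190, assuming a per-core resource consumption ratio in the range 0.0-1.0; in the actual system the move would happen at the task-interface level and respect data attributes, which are omitted here.

```cpp
#include <cstddef>
#include <deque>
#include <vector>

struct Core {
    double cpu_usage;               // current resource consumption ratio, 0.0..1.0
    std::deque<int> pending;        // granules waiting on this core
};

// S180/S190: if a core exceeds the configured consumption ratio, move its
// waiting granules to cores that are still under the threshold.
void relieve_overloaded_cores(std::vector<Core>& cores, double threshold) {
    // Collect the cores that still have headroom.
    std::vector<std::size_t> cool;
    for (std::size_t i = 0; i < cores.size(); ++i)
        if (cores[i].cpu_usage <= threshold) cool.push_back(i);
    if (cool.empty()) return;

    std::size_t next = 0;
    for (auto& hot : cores) {
        if (hot.cpu_usage <= threshold) continue;        // only drain overloaded cores
        while (!hot.pending.empty()) {                   // spread its backlog round-robin
            cores[cool[next]].pending.push_back(hot.pending.front());
            hot.pending.pop_front();
            next = (next + 1) % cool.size();
        }
    }
}
```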
In a specific embodiment, the real-time data to be processed is, for example, Ethernet data, which is processed with the method of an embodiment of the present invention. Take 8 Ethernet data processing tasks as an example; the packet attributes each task can process and the task priorities are shown in Table 1. The multi-core processor has, for example, 4 CPUs and 16 GB of memory; the CPU usage for processing the Ethernet traffic is shown in Table 2. The input real-time Ethernet traffic is 10 Gbps; its packet attributes and their proportions are shown in Table 3.
Table 1. Ethernet packet attributes and task priorities
Table 2. CPU usage when processing the Ethernet data traffic
Table 3. Ethernet packet attributes and their proportions
Load balancing is performed with the method of this embodiment of the present invention, and the scheduling result shown in Table 4 is obtained. The 10% of other traffic is discarded directly by the automatic load balancing module. The 40% of HTTP traffic is balanced across task interfaces 1-4, each task interface handling 10% of the traffic, with each task consuming 20% of a CPU and 1 GB of memory. The 20% of MAIL traffic is load-balanced to task interfaces 5-6, each task interface handling 10% of the traffic, with each task consuming 20% of a CPU and 1 GB of memory. The 30% of P2P audio/video traffic is load-balanced to task interfaces 7-8, each task interface handling 15% of the traffic, with each task consuming 60% of a CPU and 1.5 GB of memory. The total consumption is 260% of CPU resources and 9 GB of memory. Since the multi-core processor has 4 CPUs, the total CPU resource is 400% and the total memory is 16 GB, so the above traffic can be processed in full; the load balancing module achieves a good result.
Table 4. Scheduling result of the load balancing module
If the Ethernet traffic situation changes and the input traffic becomes 20 Gbps, the scheduling changes as shown in Table 5. The total consumption becomes 520% of CPU resources and 18 GB of memory, which exceeds the maximum capacity of the multi-core CPU and the memory. With the method of this embodiment of the present invention, load balancing can then be performed according to task priority. For example, since the P2P audio/video processing tasks have the lowest priority, the Ethernet traffic for P2P audio/video processing is reduced first, and the scheduling changes as shown in Table 6. After scheduling with the method of this embodiment, the P2P audio/video traffic is reduced: 15% of the traffic, 3 Gbps in total, is discarded, and task interfaces 7-8 each receive 7.5% of the traffic, i.e. 1.5 Gbps per task, so that the whole system occupies 400% of CPU resources and 15 GB of memory, reaching the maximum processing capacity of the system.
Table 5. Scheduling change results
Table 6. Scheduling change results
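The adjusted totals quoted above (520% to 400% of CPU, 18 GB to 15 GB of memory) can be checked as follows, assuming, as the doubled totals in the text suggest, that each task's CPU and memory consumption scales linearly with the traffic it handles:

```latex
% Re-deriving the 20 Gbps adjustment from the figures quoted in the text.
\begin{align*}
\text{P2P traffic dropped} &= 2 \times (3 - 1.5)\,\text{Gbps} = 3\,\text{Gbps} = 15\%\ \text{of}\ 20\,\text{Gbps},\\
\text{CPU after adjustment} &= 520\% - 2 \times 60\% = 400\% \quad (= \text{total capacity of 4 CPUs}),\\
\text{memory after adjustment} &= 18\,\text{GB} - 2 \times 1.5\,\text{GB} = 15\,\text{GB} \le 16\,\text{GB}.
\end{align*}
```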
With the automatic load balancing multi-core CPU real-time data processing method of the present invention, applications that perform multi-task real-time data processing on a multi-core CPU can be load-balanced automatically: based on the number of tasks started on the multi-core CPU, the running state of the tasks, the state of the task interfaces and the attributes of the data, the data to be processed is assigned to the CPU cores automatically, ensuring reasonable, full utilization of CPU resources and real-time data processing. The method can also use a dynamically shared cache to buffer the pending data of each CPU core, improving the real-time performance of data processing and reducing or avoiding the loss of unprocessed data.
Based on the same inventive concept as the multi-core CPU real-time data processing method with automatic load balancing shown in Fig. 2, the embodiments of the present application also provide a multi-core CPU real-time data processing device with automatic load balancing, as described in the following embodiments. Since the principle by which this device solves the problem is similar to that of the method, the implementation of the device can refer to the implementation of the method, and repeated descriptions are omitted.
Fig. 11 is a schematic structural diagram of the multi-core CPU real-time data processing device with automatic load balancing according to an embodiment of the present invention. As shown in Fig. 11, the device may comprise a real-time data receiving unit 210 and a real-time data processing unit 220, which are connected to each other.
The real-time data receiving unit 210 is configured to receive real-time data to be processed for the multi-core CPU.
The real-time data processing unit 220 is configured to distribute the real-time data to the corresponding task interface, according to the current state of all task interfaces, the task running state of the CPU core connected to each task interface, and the attributes of the real-time data, for processing by the CPU core connected to that task interface.
By distributing the real-time data according to the current state of all task interfaces, the task running state of the CPU cores connected to the task interfaces, and the attributes of the data, the device of this embodiment can make full use of idle resources on some CPU cores and avoid overloading others with data, thereby automatically balancing the load of the multi-core CPU and ensuring reasonable, full utilization of CPU resources and real-time data processing.
Fig. 12 is a schematic structural diagram of the real-time data processing unit in an embodiment of the present invention. As shown in Fig. 12, the real-time data processing unit 220 may comprise a real-time data flow identification module 221 and a real-time data distribution module 222, which are connected to each other.
The real-time data flow identification module 221 is configured to identify at least one task interface that is currently able to receive the real-time data, according to the current state of all task interfaces, the task running state of the CPU core connected to each task interface, and the attributes of the real-time data.
The real-time data distribution module 222 is configured to distribute the real-time data to the identified task interface.
In this embodiment, the real-time data flow identification module first identifies the task interfaces that can currently receive the real-time data, and the real-time data distribution module then distributes the data to the identified task interfaces, so that automatic load balancing of the multi-core CPU can be achieved effectively.
FIG. 13 is a schematic structural diagram of a multi-core CPU real-time data processing device with automatic load balancing according to another embodiment of the present invention. As shown in FIG. 13, the device shown in FIG. 11 may further include a shared cache module 230 connected between the real-time data receiving unit 210 and the real-time data processing unit 220.
The shared cache module 230 is configured to buffer the real-time data to be processed in a shared cache until it is distributed to a task interface.
In this embodiment, the cache space in the shared cache module can be shared by all tasks, which maximizes cache utilization. Buffering each CPU core's pending data in a dynamic shared cache improves the real-time performance of data processing and reduces or avoids the loss of unprocessed data.
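A minimal sketch of such a shared cache, assuming a mutex-protected bounded queue; the type and limit handling below are illustrative choices of the sketch, since the embodiment does not prescribe a concrete data structure.

```go
package sharedcache

import (
	"errors"
	"sync"
)

// SharedCache is a sketch of the shared buffer that holds received data until
// the dispatcher forwards it to a task interface. All tasks draw from the same
// pool, so spare capacity is never tied to a single core.
type SharedCache struct {
	mu    sync.Mutex
	items [][]byte
	limit int // maximum number of buffered items (configurable, see below)
}

var ErrFull = errors.New("shared cache full: data would be dropped")

func New(limit int) *SharedCache {
	return &SharedCache{limit: limit}
}

// Put buffers one received data item; it fails only when the whole shared pool
// is exhausted, not when one task's private quota is used up.
func (c *SharedCache) Put(item []byte) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(c.items) >= c.limit {
		return ErrFull
	}
	c.items = append(c.items, item)
	return nil
}

// Get removes and returns the oldest buffered item, if any.
func (c *SharedCache) Get() ([]byte, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(c.items) == 0 {
		return nil, false
	}
	item := c.items[0]
	c.items = c.items[1:]
	return item, true
}
```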
FIG. 14 is a schematic structural diagram of the real-time data processing unit in another embodiment of the present invention. As shown in FIG. 14, the real-time data processing unit 220 shown in FIG. 12 may further include an independent cache module 223 connected between the real-time data flow identification module 221 and the real-time data distribution module 222.
The independent cache module 223 is configured to buffer the real-time data to be processed in the independent cache of the identified task interface, so as to smooth the mismatch between the rate at which the data is delivered to the CPU core and the rate at which the CPU core processes it.
In this embodiment, the independent cache module buffers the data held at a task interface, smoothing the rate at which data is delivered to the CPU core against the rate at which the CPU core processes it.
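One way to realize the independent cache of a task interface is a bounded per-interface queue; the Go sketch below uses a buffered channel for this purpose, which is an implementation choice of the sketch rather than a requirement of the embodiment. When the independent cache is full, the caller can fall back to the shared cache described above.

```go
package taskiface

// TaskInterface holds a buffered channel that plays the role of the
// independent cache in this sketch: the dispatcher can push at the arrival
// rate while the CPU core drains the buffer at its own processing rate, so
// short bursts are absorbed instead of blocking the dispatcher.
type TaskInterface struct {
	buf chan []byte
}

func New(depth int) *TaskInterface {
	return &TaskInterface{buf: make(chan []byte, depth)}
}

// Offer enqueues without blocking; it reports false when the independent
// cache is full, so the caller can fall back to the shared cache.
func (t *TaskInterface) Offer(item []byte) bool {
	select {
	case t.buf <- item:
		return true
	default:
		return false
	}
}

// Serve is run by the worker pinned to the CPU core behind this interface.
func (t *TaskInterface) Serve(process func([]byte)) {
	for item := range t.buf {
		process(item)
	}
}

func (t *TaskInterface) Close() { close(t.buf) }
```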
FIG. 15 is a schematic structural diagram of a multi-core CPU real-time data processing device with automatic load balancing according to yet another embodiment of the present invention. As shown in FIG. 15, the device shown in FIG. 11 may further include a CPU task configuration unit 240 connected to the real-time data processing unit 220.
The CPU task configuration unit 240 is configured to set the number of task interfaces on a CPU core and/or the attributes of the data handled by a CPU processing task, according to one or more of the current status of all task interfaces, the task running status of the CPU core connected to each task interface, and the attributes of the real-time data to be processed.
In this embodiment, having the CPU task configuration unit set the number of task interfaces on a CPU core and/or the attributes of the data handled by a CPU processing task further balances the load across the CPU cores.
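The sketch below illustrates one possible configuration policy of this kind; the load thresholds and the idea of adding or removing one interface at a time are assumptions made for illustration, not part of the embodiment itself.

```go
package taskconfig

// CoreStatus is an illustrative snapshot of one CPU core used by the
// configuration step; the embodiment names the inputs but not their shape.
type CoreStatus struct {
	Core       int
	Load       float64  // fraction of the core consumed by its running tasks, 0..1
	Interfaces int      // task interfaces currently opened on this core
	Attributes []string // data attributes its tasks are configured to handle
}

// InterfacesWanted is one possible policy: open more task interfaces on a
// lightly loaded core and fewer on a saturated one, within fixed bounds.
func InterfacesWanted(s CoreStatus, min, max int) int {
	switch {
	case s.Load < 0.3 && s.Interfaces < max:
		return s.Interfaces + 1
	case s.Load > 0.8 && s.Interfaces > min:
		return s.Interfaces - 1
	default:
		return s.Interfaces
	}
}
```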
FIG. 16 is a schematic structural diagram of a multi-core CPU real-time data processing device with automatic load balancing according to a further embodiment of the present invention. As shown in FIG. 16, the device shown in FIG. 13 may further include a shared cache configuration module 250 connected to the shared cache module 230.
The shared cache configuration module 250 is configured to set the size of the shared cache according to one or more of the current status of all task interfaces, the task running status of the CPU core connected to each task interface, and the attributes of the real-time data to be processed.
In this embodiment, having the shared cache configuration module set the size of the shared cache further prevents the loss of pending data when a large backlog accumulates.
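As one illustrative sizing policy (the thresholds and growth factors are assumptions of the sketch), the shared cache limit could be adjusted from the observed backlog:

```go
package sharedcache

// nextCacheSize sketches one possible sizing policy for the shared cache:
// grow the limit while data accumulates faster than the cores drain it, and
// shrink it back when the backlog clears. Bounds keep the limit reasonable.
func nextCacheSize(current, backlog, minSize, maxSize int) int {
	switch {
	case backlog > current*3/4 && current*2 <= maxSize:
		return current * 2 // heavy accumulation: double the limit
	case backlog < current/4 && current/2 >= minSize:
		return current / 2 // mostly idle: release memory
	default:
		return current
	}
}
```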
FIG. 17 is a schematic structural diagram of the real-time data processing unit in a further embodiment of the present invention. As shown in FIG. 17, the real-time data processing unit 220 shown in FIG. 14 may further include an independent cache configuration module 224 connected to the independent cache module 223.
The independent cache configuration module 224 is configured to set the size of the independent cache according to one or more of the current status of all task interfaces, the task running status of the CPU core connected to each task interface, and the attributes of the real-time data to be processed.
In this embodiment, having the independent cache configuration module set the size of the independent cache further improves cache utilization and strengthens the smoothing of the mismatch between the rate at which the pending data is delivered to the CPU core and the rate at which the CPU core processes it.
In one embodiment, the real-time data to be processed comprises a plurality of data granules. In the device shown in FIG. 11, the real-time data processing unit 220 may include a data granule processing module. The methods of the foregoing embodiments can then operate on data granules; that is, the data granules take the place of the real-time data to be processed described above.
The data granule processing module 2221 is configured to distribute each data granule in parallel to a corresponding task interface according to the current status of all task interfaces, the task running status of the CPU core connected to each task interface, and the attributes of the data granules, so that the CPU core connected to that task interface can process the data.
In this embodiment, the data granule processing module distributes data to the corresponding task interfaces with the data granule as the unit of distribution, which implements the allocation of data effectively and allows data granules with the same attribute to be processed in parallel, thereby increasing the speed of real-time data processing.
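The following Go sketch shows one way to fan data granules out in parallel; the per-granule decision itself is abstracted behind a callback whose signature is an assumption of the sketch (it would stand in for the dispatch step sketched earlier).

```go
package granule

import "sync"

// Granule is the unit of distribution in this sketch; its attribute decides
// which group of task interfaces may process it.
type Granule struct {
	Attr    string
	Payload []byte
}

// Dispatch fans the granules out in parallel. dispatchOne stands in for the
// per-granule decision (pick a task interface from the candidates for this
// attribute and enqueue the granule there).
func Dispatch(granules []Granule, dispatchOne func(Granule)) {
	var wg sync.WaitGroup
	for _, g := range granules {
		wg.Add(1)
		go func(g Granule) {
			defer wg.Done()
			dispatchOne(g)
		}(g)
	}
	wg.Wait()
}
```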
FIG. 18 is a schematic structural diagram of a multi-core CPU real-time data processing device with automatic load balancing according to another embodiment of the present invention. As shown in FIG. 18, the device shown in FIG. 11 may further include a task state detection unit 260 and a first data redistribution unit 270, which are connected to each other; the task state detection unit 260 is connected to the real-time data processing unit 220.
The task state detection unit 260 is configured to detect changes in the number of tasks running on each CPU core.
The first data redistribution unit 270 is configured to redistribute the real-time data to be processed that has already been distributed to the task interfaces according to the changed number of running tasks, so as to balance the data load across the CPU cores of the multi-core CPU.
In this embodiment, the task state detection unit detects changes in the number of tasks running on each CPU core, and the first data redistribution unit adjusts the CPU load dynamically: when the number of tasks increases, data is allocated to the newly added tasks; when the number of tasks decreases, data originally assigned to tasks that are no longer running is reassigned to tasks that are still running. The CPU load is thereby adjusted in time, preventing load imbalance caused by changes in the number of tasks.
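A minimal sketch of the detection and redistribution steps, assuming per-core task counts and per-core pending queues as inputs; the round-robin reassignment is an illustrative policy, not the only possible one.

```go
package rebalance

// TaskCountChange compares the previous and current number of running tasks
// per core and reports the cores whose count changed, so that queued data can
// be redistributed accordingly.
func TaskCountChange(prev, curr map[int]int) (grew, shrank []int) {
	for core, n := range curr {
		if n > prev[core] {
			grew = append(grew, core) // new tasks started: give them part of the backlog
		}
	}
	for core, n := range prev {
		if curr[core] < n {
			shrank = append(shrank, core) // tasks stopped: move their pending data away
		}
	}
	return grew, shrank
}

// Redistribute hands the pending items of cores that lost their tasks to the
// cores that still run tasks, round-robin.
func Redistribute(pending map[int][][]byte, stopped, running []int) {
	if len(running) == 0 {
		return
	}
	i := 0
	for _, core := range stopped {
		for _, item := range pending[core] {
			dst := running[i%len(running)]
			pending[dst] = append(pending[dst], item)
			i++
		}
		delete(pending, core)
	}
}
```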
FIG. 19 is a schematic structural diagram of a multi-core CPU real-time data processing device with automatic load balancing according to a further embodiment of the present invention. As shown in FIG. 19, the device shown in FIG. 11 may further include a CPU resource state detection unit 280 and a second data redistribution unit 290, which are connected to each other; the CPU resource state detection unit 280 is connected to the real-time data processing unit 220.
The CPU resource state detection unit 280 is configured to determine whether the resource consumption ratio of a CPU core of the multi-core CPU exceeds a set value.
The second data redistribution unit 290 is configured to, when the set value is exceeded, allocate the real-time data pending on the CPU core whose resource consumption ratio exceeds the set value to other CPU cores of the multi-core CPU whose resource consumption ratio does not exceed the set value.
In this embodiment, the CPU resource state detection unit applies a threshold to the CPU resources consumed by a running task, and when consumption exceeds that threshold the data is balanced onto other tasks. Reassigning the pending data of CPU cores that consume more resources to cores that consume fewer resources makes the data load of the CPU more even.
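A minimal sketch of the threshold check, assuming resource consumption is expressed as a ratio between 0 and 1 per core; the pending data of the overloaded cores could then be handed to the cores reported as having headroom, for example with a redistribution step like the one sketched above.

```go
package rebalance

// OverThreshold reports the cores whose resource consumption ratio exceeds
// the configured threshold and the cores that still have headroom. The 0..1
// ratio and the threshold value are illustrative assumptions.
func OverThreshold(usage map[int]float64, threshold float64) (overloaded, spare []int) {
	for core, u := range usage {
		if u > threshold {
			overloaded = append(overloaded, core)
		} else {
			spare = append(spare, core)
		}
	}
	return overloaded, spare
}
```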
For applications that use a multi-core CPU for multi-task real-time data processing, the multi-core CPU real-time data processing device with automatic load balancing of the present invention performs load balancing automatically: it distributes the data to be processed to the individual CPU cores according to information such as the number of tasks started on the multi-core CPU, the running status of the tasks, the status of the task interfaces, and the attributes of the data, so as to ensure reasonable, full utilization of CPU resources and real-time data processing. The device can also buffer each CPU core's pending data in a dynamic shared cache module, improving the real-time performance of data processing and reducing or avoiding the loss of unprocessed data.
Those skilled in the art will appreciate that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
The specific embodiments described above further explain the objectives, technical solutions, and beneficial effects of the present invention in detail. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit its scope of protection. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610011906.2A CN105677484A (en) | 2016-01-08 | 2016-01-08 | A multi-core CPU real-time data processing method with automatic load balancing |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610011906.2A CN105677484A (en) | 2016-01-08 | 2016-01-08 | A multi-core CPU real-time data processing method with automatic load balancing |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN105677484A true CN105677484A (en) | 2016-06-15 |
Family
ID=56299571
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610011906.2A Pending CN105677484A (en) | 2016-01-08 | 2016-01-08 | A multi-core CPU real-time data processing method with automatic load balancing |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN105677484A (en) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106487784A (en) * | 2016-09-28 | 2017-03-08 | 东软集团股份有限公司 | A kind of method of conversation shift, device and fire wall |
| CN106686352A (en) * | 2016-12-23 | 2017-05-17 | 北京大学 | Real-time processing method of multi-channel video data on multi-GPU platform |
| CN112000462A (en) * | 2020-07-14 | 2020-11-27 | 张世民 | A data processing method and device based on shared peripheral resources |
| CN114257549A (en) * | 2021-12-21 | 2022-03-29 | 北京锐安科技有限公司 | A traffic forwarding method, device, device and storage medium |
| CN115643317A (en) * | 2022-10-24 | 2023-01-24 | 北京华耀科技有限公司 | Data transmission method, device, equipment and storage medium |
| CN119201798A (en) * | 2024-11-28 | 2024-12-27 | 苏州元脑智能科技有限公司 | Data processing method, device, data storage system and electronic device |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050125797A1 (en) * | 2003-12-09 | 2005-06-09 | International Business Machines Corporation | Resource management for a system-on-chip (SoC) |
| CN102855218A (en) * | 2012-05-14 | 2013-01-02 | 中兴通讯股份有限公司 | Data processing system, method and device |
| CN103135943A (en) * | 2013-02-21 | 2013-06-05 | 浪潮电子信息产业股份有限公司 | Self-adaptive IO (Input Output) scheduling method of multi-control storage system |
| US20130219405A1 (en) * | 2012-02-21 | 2013-08-22 | Electronics and Telecommunications Research Institute of Suwon | Apparatus and method for managing data stream distributed parallel processing service |
| CN104158764A (en) * | 2014-08-12 | 2014-11-19 | 杭州华三通信技术有限公司 | Message processing method and device |
| CN104239153A (en) * | 2014-09-29 | 2014-12-24 | 三星电子(中国)研发中心 | Method and device for balancing multi-core CPU load |
- 2016-01-08: CN application CN201610011906.2A filed, published as patent/CN105677484A/en, status Pending
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050125797A1 (en) * | 2003-12-09 | 2005-06-09 | International Business Machines Corporation | Resource management for a system-on-chip (SoC) |
| US20130219405A1 (en) * | 2012-02-21 | 2013-08-22 | Electronics and Telecommunications Research Institute of Suwon | Apparatus and method for managing data stream distributed parallel processing service |
| CN102855218A (en) * | 2012-05-14 | 2013-01-02 | 中兴通讯股份有限公司 | Data processing system, method and device |
| CN103135943A (en) * | 2013-02-21 | 2013-06-05 | 浪潮电子信息产业股份有限公司 | Self-adaptive IO (Input Output) scheduling method of multi-control storage system |
| CN104158764A (en) * | 2014-08-12 | 2014-11-19 | 杭州华三通信技术有限公司 | Message processing method and device |
| CN104239153A (en) * | 2014-09-29 | 2014-12-24 | 三星电子(中国)研发中心 | Method and device for balancing multi-core CPU load |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106487784A (en) * | 2016-09-28 | 2017-03-08 | 东软集团股份有限公司 | A kind of method of conversation shift, device and fire wall |
| CN106487784B (en) * | 2016-09-28 | 2019-06-25 | 东软集团股份有限公司 | A kind of method, apparatus and firewall of conversation shift |
| CN106686352A (en) * | 2016-12-23 | 2017-05-17 | 北京大学 | Real-time processing method of multi-channel video data on multi-GPU platform |
| CN106686352B (en) * | 2016-12-23 | 2019-06-07 | 北京大学 | The real-time processing method of the multi-path video data of more GPU platforms |
| CN112000462A (en) * | 2020-07-14 | 2020-11-27 | 张世民 | A data processing method and device based on shared peripheral resources |
| CN114257549A (en) * | 2021-12-21 | 2022-03-29 | 北京锐安科技有限公司 | A traffic forwarding method, device, device and storage medium |
| CN114257549B (en) * | 2021-12-21 | 2023-01-10 | 北京锐安科技有限公司 | A traffic forwarding method, device, equipment and storage medium |
| CN115643317A (en) * | 2022-10-24 | 2023-01-24 | 北京华耀科技有限公司 | Data transmission method, device, equipment and storage medium |
| CN119201798A (en) * | 2024-11-28 | 2024-12-27 | 苏州元脑智能科技有限公司 | Data processing method, device, data storage system and electronic device |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN105677484A (en) | A multi-core CPU real-time data processing method with automatic load balancing | |
| CN107852413B (en) | Network device, method and storage medium for offloading network packet processing to GPU | |
| US9459904B2 (en) | NUMA I/O aware network queue assignments | |
| US10897428B2 (en) | Method, server system and computer program product for managing resources | |
| CN103327072B (en) | Cluster load balancing method and system | |
| KR101553649B1 (en) | Multicore apparatus and job scheduling method thereof | |
| CN105491138B (en) | A Distributed Load Scheduling Method Based on Load Rate Hierarchical Triggering | |
| CN104216774B (en) | Multi-core equipment and job scheduling method thereof | |
| US9317427B2 (en) | Reallocating unused memory databus utilization to another processor when utilization is below a threshold | |
| US9274852B2 (en) | Apparatus and method for managing virtual processing unit | |
| CN107122233B (en) | A Multi-VCPU Adaptive Real-time Scheduling Method for TSN Services | |
| CN113238848A (en) | Task scheduling method and device, computer equipment and storage medium | |
| KR20080041047A (en) | Apparatus and Method for Load Balancing in Multi-Core Processor Systems | |
| JP2011529210A (en) | Technology for managing processor resources of multiprocessor servers running multiple operating systems | |
| CN109450803B (en) | Traffic scheduling method, device and system | |
| US10778807B2 (en) | Scheduling cluster resources to a job based on its type, particular scheduling algorithm,and resource availability in a particular resource stability sub-levels | |
| CN104598298A (en) | Virtual machine dispatching algorithm based on task load and current work property of virtual machine | |
| US20190171489A1 (en) | Method of managing dedicated processing resources, server system and computer program product | |
| WO2014114072A1 (en) | Regulation method and regulation device for i/o channels in virtualization platform | |
| US20230401109A1 (en) | Load balancer | |
| CN112214299A (en) | Multi-core processor and task scheduling method and device thereof | |
| Komarasamy et al. | A novel approach for dynamic load balancing with effective bin packing and vm reconfiguration in cloud | |
| CN106325995A (en) | GPU resource distribution method and system | |
| CN107423134A (en) | A kind of dynamic resource scheduling method of large-scale calculations cluster | |
| CN105847385A (en) | Cloud computing platform virtual machine dispatching method based on operation duration |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| WD01 | Invention patent application deemed withdrawn after publication | ||
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20160615 |