
CN120542602A - A satisfaction-based model training incentive method in federated learning - Google Patents

A satisfaction-based model training incentive method in federated learning

Info

Publication number
CN120542602A
CN120542602A (application CN202510703377.1A)
Authority
CN
China
Prior art keywords
satisfaction
server
quality
node
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510703377.1A
Other languages
Chinese (zh)
Inventor
李晓欢
覃少雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN202510703377.1A priority Critical patent/CN120542602A/en
Publication of CN120542602A publication Critical patent/CN120542602A/en
Pending legal-status Critical Current

Links

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract


The present invention relates to a satisfaction-based model training incentive method in federated learning. In a meta-computing framework based on federated learning, information age and service delay are calculated. A quality adjustment parameter is used to adjust the data size, and model quality is obtained by combining the information age. A conversion parameter is used to adjust model quality and service delay to obtain satisfaction. Server utility is obtained based on satisfaction and server rewards. A Stackelberg game model is constructed based on node utility and server utility. In this model, the server, as leader, determines a reward strategy, and each node, as follower, selects its node update period based on the server's reward strategy. A deep reinforcement learning algorithm is used to solve the Stackelberg game model to obtain an incentive scheme. The conversion parameter is used to adjust model quality and service delay so as to balance the two, and the satisfaction used to balance model quality and service delay is integrated into the server utility, thereby effectively incentivizing nodes to participate in federated learning.

Description

Satisfaction-based model training incentive method in federated learning
Technical Field
The invention relates to the technical field of federated learning, and in particular to a satisfaction-based model training incentive method in federated learning.
Background
The industrial metaverse is a digital virtual environment constructed using technologies such as Virtual Reality (VR), Augmented Reality (AR), and the Internet of Things (IoT), and aims to simulate, optimize, and manage links such as industrial production, design, and operation. Constructing the industrial metaverse requires collecting a large amount of real-time sensing data from each node of the industrial Internet of Things and relying on substantial computing resources to process that data, while still guaranteeing low service delay. To achieve this, adopting the distributed machine learning framework of federated learning is particularly important, as it can effectively address data privacy protection and computing power allocation. However, it is worth noting that the industrial Internet of Things nodes participating in federated learning need to contribute their computing and data resources. Given the selfish nature of nodes, they often lack the incentive to contribute resources gratuitously. It is therefore a challenge to design an efficient incentive mechanism that encourages nodes to actively participate in federated learning.
Disclosure of Invention
The invention provides a satisfaction-based model training incentive method in federated learning, which aims to solve at least one of the technical problems in the prior art.
The technical scheme of the invention is a satisfaction-based model training incentive method in federated learning, comprising the following steps:
in a meta-computing framework based on federated learning, computing information age and service delay;
adjusting the data size by utilizing the quality adjustment parameter, and combining the information age to obtain the model quality;
adjusting the model quality and the service delay by using conversion parameters to obtain satisfaction;
obtaining the server utility according to the satisfaction and the server reward;
constructing a Stackelberg game model according to the node utility and the server utility, wherein the server, as a leader, determines a reward strategy, and the node, as a follower, selects a node update period according to the reward strategy of the server;
and solving the Stackelberg game model by using a deep reinforcement learning algorithm to obtain an incentive scheme.
According to some embodiments of the invention, the information age is expressed as:
wherein A_i is the information age, θ_i is the node update period, and t is the unit time.
According to some embodiments of the invention, the adjusting of the data size using the quality adjustment parameter and combining the information age to obtain the model quality includes:
multiplying the data amount collected in the unit time period by the task duration to obtain a first intermediate value, and dividing the first intermediate value by the node update period to obtain the data size, which is expressed as:
D_i = dT/θ_i,
wherein D_i is the data size, T is the task duration, d is the data amount collected in the unit time period, and θ_i is the node update period;
multiplying the quality adjustment parameter by the data size to obtain a second intermediate value, and dividing the second intermediate value by the information age to obtain the model quality, which is expressed as:
Q_i = ρD_i/A_i,
wherein Q_i is the model quality, ρ is the quality adjustment parameter, D_i is the data size, and A_i is the information age.
According to some embodiments of the invention, the node update period is expressed as:
θ_i = c_i·t + a_i·t,
where θ_i is the node update period, c_i·t is the time taken to collect and process the model training data, a_i·t is the duration from the end of data collection to the beginning of data collection in the next stage, and t is the unit time.
According to some embodiments of the invention, the service delay is expressed as:
wherein E_i is the service delay, θ_i is the node update period, and t is the unit time.
According to some embodiments of the invention, the conversion parameters include a quality conversion parameter and a delay conversion parameter;
Said adjusting said model quality and said service delay using conversion parameters to obtain satisfaction, comprising:
multiplying the quality conversion parameter by the model quality to obtain a third intermediate value;
multiplying the delay conversion parameter by the service delay to obtain a fourth intermediate value;
subtracting the fourth intermediate value from the third intermediate value to obtain the satisfaction, expressed as:
G_i = τQ_i - λE_i,
where G_i is the satisfaction, τ is the quality conversion parameter, λ is the delay conversion parameter, Q_i is the model quality, and E_i is the service delay.
According to some embodiments of the invention, obtaining server utility from the satisfaction and server rewards comprises:
Multiplying the satisfaction with unit satisfaction profit to obtain satisfaction gain;
obtaining the server utility according to the difference between the satisfaction gain and the server reward, which is expressed as:
V = Σ_{i∈I} (βG_i - R_i),
wherein V is the server utility, β is the unit satisfaction profit, G_i is the satisfaction, and R_i is the server reward.
According to some embodiments of the invention, the node utility is obtained by:
dividing the unit cost of maintaining the node update period by the node update period to obtain the cost C_i = σ_i/θ_i;
subtracting the cost from the server reward yields the node utility, expressed as:
U_i = R_i - C_i,
wherein R_i is the server reward, r_i is the unit reward, θ_i is the node update period, C_i is the cost, σ_i is the unit cost of maintaining the node update period, and U_i is the node utility.
The technical scheme of the invention also relates to an electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the satisfaction-based model training incentive method in federated learning.
The technical scheme of the invention also relates to a storage medium storing a computer program which, when executed by a processor, implements the satisfaction-based model training incentive method in federated learning.
The method has the advantages that, in a meta-computing framework based on federated learning, the information age and service delay are calculated; the data size is adjusted by the quality adjustment parameter and combined with the information age to obtain the model quality; the model quality and service delay are then adjusted by the conversion parameters to obtain satisfaction; the server utility is obtained according to the satisfaction and the server reward; a Stackelberg game model is built according to the node utility and the server utility, wherein the server, as leader, determines the reward strategy and the node, as follower, selects the node update period according to the server's reward strategy; and the Stackelberg game model is solved using a deep reinforcement learning algorithm to obtain an incentive scheme. The conversion parameters adjust the model quality and the service delay so as to balance the two, and the satisfaction used to balance model quality and service delay is integrated into the server utility, so that nodes can be effectively incentivized to participate in federated learning.
Further, additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is an alternative flow chart of a satisfaction-based model training incentive method in federated learning in accordance with an embodiment of the present invention.
Fig. 2 is a schematic diagram of a DRL-based Stackelberg game process in an embodiment of the invention.
Fig. 3 is a schematic diagram of a DRL controller in an embodiment of the invention.
FIG. 4 is a schematic diagram of a federated learning-based meta-computing framework in an embodiment of the present invention.
Detailed Description
The conception, specific structure, and technical effects produced by the present application will be clearly and completely described below with reference to the embodiments and the drawings to fully understand the objects, aspects, and effects of the present application. It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other.
It should be noted that, unless otherwise specified, when a feature is referred to as being "fixed" or "connected" to another feature, it may be directly or indirectly fixed or connected to the other feature. Further, the descriptions of the upper, lower, left, right, top, bottom, etc. used in the present invention are merely with respect to the mutual positional relationship of the respective constituent elements of the present invention in the drawings.
Furthermore, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. The terminology used in the description presented herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "and/or" as used herein includes any combination of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element of the same type from another. For example, a first element could also be termed a second element, and, similarly, a second element could also be termed a first element, without departing from the scope of the present invention.
Referring to fig. 1 to fig. 4, in some embodiments, the present invention provides a satisfaction-based model training incentive method in federated learning, including, but not limited to, steps 101 to 106, each of which is described in turn below.
Step 101, in a meta-computing framework based on federated learning, the information age and service delay are computed.
In a specific embodiment, the satisfaction-based model training incentive method in federated learning further comprises constructing a meta-computing framework based on federated learning before calculating the information age and service delay in that framework.
Referring to fig. 4, the satisfaction-based model training incentive method in federated learning is applied to a federated learning-based meta-computing framework, which is composed of five modules, namely a device management module, a resource scheduling module, a task management module, a zero trust computing management module, and an identity and access management module.
The device management module comprises a plurality of edge nodes. Its main purpose is to collect data from the production devices, integrate the computing, storage, and communication resources of the edge nodes, and then map these resources to servers, thereby converting them into objects that can be easily accessed by the resource scheduling module. The resource scheduling module includes a plurality of virtual edge nodes. These virtual edge nodes constantly monitor changes in physical node configuration details, simulate the possible states of the nodes, and dynamically perform resource optimization. The task management module is located at the server end and is responsible for receiving the target object's request, decomposing the task, and designing an incentive scheme according to the constraints of the task. The zero trust computing management module performs the global aggregation of federated learning through the blockchain. The identity and access management module ensures that the target object has the appropriate data access rights.
Specifically, the target object is a user. Referring to fig. 4, the target object views a device on the virtual platform and issues a task; the task management module receives the task; the server plays a Stackelberg game with the virtual nodes according to the task requirements; the task management module obtains an incentive scheme, which is applied to the nodes of the device management module; the zero trust computing management module executes the federated learning global model aggregation; the trained global model is uploaded to the target object; and the target object pays according to the incentive scheme.
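For readers who prefer a concrete picture of the framework, a minimal Python sketch of the device-management and resource-scheduling interplay follows; the class names, fields, and the virtual_view method are illustrative assumptions, not structures defined by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeNode:
    """A physical edge node registered with the device management module (illustrative fields)."""
    node_id: int
    compute: float     # available computing resources
    storage: float     # available storage resources
    data_rate: float   # data collected per unit time period (d in the text)

@dataclass
class MetaComputingFramework:
    """Minimal stand-in for the framework: only the mapping from physical edge nodes to the
    virtual node view monitored by the resource scheduling module is modelled here."""
    physical_nodes: list = field(default_factory=list)

    def virtual_view(self):
        # Resource scheduling module: expose each node's resources as schedulable objects.
        return {n.node_id: {"compute": n.compute, "storage": n.storage, "data_rate": n.data_rate}
                for n in self.physical_nodes}
```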
In some embodiments, in federated learning where a node has a data cache and the information age is considered, the information age of node i is expressed as:
where A_i is the information age, θ_i is the node update period, a_i·t is the duration from the end of data collection to the start of data collection in the next stage, and t is the unit time.
In particular, the Age of Information (AoI) is used to measure model freshness, representing the time difference between the generation of the data and the receipt and processing of the data. Using the information age to measure model freshness guarantees model quality while taking information freshness into account.
In some embodiments, the service delay is expressed as:
where E_i is the service delay, θ_i is the node update period, a_i·t is the duration from the end of data collection to the beginning of data collection in the next stage, and t is the unit time.
In particular, the service delay represents the length of time from when a node receives a request to when it uploads its local model, including the data collection and model training periods.
Step 102, the data size is adjusted by the quality adjustment parameter and combined with the information age to obtain the model quality.
Here, the data refers to the node's training data, and the data size is the amount of data the node uses for training. The quality adjustment parameter is used to adjust the data size. The model quality is the quality of the model trained by the node.
In some embodiments, adjusting the data size using the quality adjustment parameter and combining the information age to obtain the model quality comprises:
multiplying the data amount collected in the unit time period by the task duration to obtain a first intermediate value, and dividing the first intermediate value by the node update period to obtain the data size, which is expressed as:
D_i = dT/θ_i,
wherein D_i is the data size, T is the task duration, d is the data amount collected in the unit time period, and θ_i is the node update period;
multiplying the quality adjustment parameter by the data size to obtain a second intermediate value, and dividing the second intermediate value by the information age to obtain the model quality, which is expressed as:
Q_i = ρD_i/A_i,
wherein Q_i is the model quality, ρ is the quality adjustment parameter, D_i is the data size, and A_i is the information age.
In some embodiments, the time taken to collect and process the model training data is added to the duration from the end of data collection to the beginning of the next stage of data collection, resulting in the node update period, expressed as:
θ_i = c_i·t + a_i·t,
where θ_i is the node update period, c_i·t is the time taken to collect and process the model training data, a_i·t is the duration from the end of data collection to the beginning of the next stage of data collection, and t is the unit time.
Specifically, the node update period indicates an update period of data collection, calculation, and transmission in the node. The node periodically updates its cached data with a node update period. The duration from the end of data collection to the beginning of data collection at the next stage includes a traffic period and an idle period.
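Putting steps 101 and 102 together, a minimal Python sketch of these per-node quantities follows; the information age A_i is taken as an input here because its closed-form expression is not reproduced above, and the function names are illustrative.

```python
def node_update_period(c_i: float, a_i: float, t: float) -> float:
    """theta_i = c_i*t + a_i*t: data collection/processing time plus the idle/traffic duration."""
    return c_i * t + a_i * t

def data_size(d: float, T: float, theta_i: float) -> float:
    """D_i = d*T / theta_i: data collected per unit period times task duration, over the update period."""
    return d * T / theta_i

def model_quality(rho: float, D_i: float, A_i: float) -> float:
    """Q_i = rho*D_i / A_i: quality-adjusted data size discounted by the information age."""
    return rho * D_i / A_i
```

For example, with t = 1, c_i = 2, and a_i = 1, the update period is θ_i = 3; with d = 4 and T = 6 this gives D_i = 8, and with ρ = 0.5 and A_i = 2 the model quality is Q_i = 2.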
Step 103, the model quality and service delay are adjusted using the conversion parameters to obtain satisfaction.
In particular, the trade-off between low latency and high model quality in the industrial metaverse is critical, so satisfaction is used to balance model quality and service delay for node i.
It can be understood that the proposed satisfaction index comprehensively considers the data size, the information age, and the service delay, and balances model quality against service delay in the industrial metaverse, unlike traditional quality assessment approaches that focus on a single factor or do not fully consider these key factors.
In some embodiments, the conversion parameters include a quality conversion parameter and a delay conversion parameter. Adjusting the model quality and the service delay using the conversion parameters to obtain satisfaction includes:
Multiplying the quality conversion parameter by the model quality to obtain a third intermediate value;
multiplying the delay conversion parameter by the service delay to obtain a fourth intermediate value;
Subtracting the fourth intermediate value from the third intermediate value to obtain satisfaction, expressed as:
G_i = τQ_i - λE_i,
where G_i is the satisfaction, τ is the quality conversion parameter, λ is the delay conversion parameter, Q_i is the model quality, and E_i is the service delay.
In particular, the quality conversion parameter is used to adjust the model quality and the delay conversion parameter is used to adjust the service delay. It will be appreciated that adjusting the model quality with the quality conversion parameter and the service delay with the delay conversion parameter balances the two. The satisfaction used to balance model quality and service delay is integrated into the server utility, a Stackelberg game model is constructed from the server utility, and solving the Stackelberg game model yields an incentive scheme, so that nodes can be effectively incentivized to participate in federated learning.
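Continuing the hypothetical helpers above, the satisfaction computation is a one-liner:

```python
def satisfaction(tau: float, Q_i: float, lam: float, E_i: float) -> float:
    """G_i = tau*Q_i - lam*E_i: quality gain traded off against the delay penalty."""
    return tau * Q_i - lam * E_i
```

A larger τ favours model quality, while a larger λ penalises service delay more heavily; this trade-off is exactly what the satisfaction index captures.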
Step 104, the server utility is obtained according to the satisfaction and the server reward.
In particular, the server utility is the utility that the server obtains from the nodes. The set of participating industrial Internet of Things nodes is denoted I_all = {1, ..., I}.
In some embodiments, deriving the server utility from the satisfaction and the server rewards includes:
Multiplying satisfaction with unit satisfaction profit to obtain satisfaction gain;
obtaining the server utility according to the difference between the satisfaction gain and the server reward, which is expressed as:
V = Σ_{i∈I} (βG_i - R_i),
where V is the server utility, β is the unit satisfaction profit, G_i is the satisfaction, and R_i is the server reward.
A node's participation in a federated learning task may be rewarded by the server while also incurring costs. In some embodiments, the node utility is obtained by:
dividing the unit cost of maintaining the node update period by the node update period to obtain the cost C_i = σ_i/θ_i;
subtracting the cost from the server reward yields the node utility, expressed as:
U_i = R_i - C_i,
where R_i is the server reward, r_i is the unit reward, θ_i is the node update period, C_i is the cost, σ_i is the unit cost of maintaining the node update period, and U_i is the node utility.
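Continuing the sketch, the node cost, node utility, and server utility could be written as follows; treating the server utility as a sum over participating nodes is an assumption, since the patent's closed-form expression is not reproduced here.

```python
def node_cost(sigma_i: float, theta_i: float) -> float:
    """C_i = sigma_i / theta_i: unit maintenance cost divided by the node update period."""
    return sigma_i / theta_i

def node_utility(R_i: float, C_i: float) -> float:
    """U_i = R_i - C_i: server reward minus the node's cost."""
    return R_i - C_i

def server_utility(beta: float, G: list, R: list) -> float:
    """V = sum_i (beta*G_i - R_i): satisfaction gain minus the reward paid, over all nodes (assumed form)."""
    return sum(beta * g - r for g, r in zip(G, R))
```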
Step 105, a Stackelberg game model is constructed according to the node utility and the server utility, wherein the server, as leader, determines the reward strategy, and the node, as follower, selects the node update period according to the server's reward strategy.
Specifically, given that both the server and the nodes seek to maximize their own interests, the interaction between the two is modeled as a two-stage Stackelberg game, in which the server determines the reward strategy as the leader and the nodes respond with their node update periods as followers.
It will be appreciated that the utility models of the server and the nodes are built separately, and the incentive scheme is designed from the interests of both parties, unlike existing incentive schemes that focus only on node contributions or on the interests of a single party.
In some embodiments, the Stackelberg game model is expressed as:
Ω = {(SP ∪ {i}_{i∈I}), (r_i, θ_i), (V, U_i)},
where SP ∪ {i}_{i∈I} represents the set consisting of the server and its corresponding nodes, (r_i, θ_i) represents the set of strategies (the server's unit rewards and the nodes' update periods), (V, U_i) represents the set of utilities, SP is the server, r_i is the unit reward, i is a node, θ_i is the node update period, V is the server utility, U_i is the node utility, and Ω is the Stackelberg game model.
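The leader-follower structure can be sketched as a single interaction round; server_policy and best_response below are placeholders for the strategies that the deep reinforcement learning solver described in the next step actually learns.

```python
def stackelberg_round(server_policy, best_response, nodes):
    """One leader-follower round: the server announces per-node unit rewards,
    then each node replies with the update period that maximizes its own utility."""
    rewards = server_policy(nodes)                               # leader move: r_i for each node
    periods = {i: best_response(i, rewards[i]) for i in nodes}   # follower moves: theta_i
    return rewards, periods

# Toy stand-ins (not the patent's strategies), just to show the calling convention:
nodes = [0, 1, 2]
flat_policy = lambda ns: {i: 1.0 for i in ns}   # every node gets the same unit reward
fixed_response = lambda i, r: 2.0               # every node replies with theta_i = 2.0
print(stackelberg_round(flat_policy, fixed_response, nodes))
```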
Step 106, the Stackelberg game model is solved using a deep reinforcement learning algorithm to obtain the incentive scheme.
It should be noted that the incentive scheme includes the unit rewards and the node update periods.
It will be appreciated that conventional methods for solving game equilibria require a large amount of participant information and are difficult to implement in the industrial metaverse. By using deep reinforcement learning, the optimal strategy is learned from experience, no prior information is needed, the privacy of participants is protected, and the difficulty of information acquisition is alleviated. In addition, the satisfaction-based model training incentive method in federated learning effectively improves the utilization efficiency of system resources and the overall benefit without reducing model accuracy.
In a specific embodiment, the Stackelberg game model is solved using the MADDPG algorithm in DRL. Specifically, DRL is an abbreviation of Deep Reinforcement Learning, a machine learning method that combines deep learning (DL) and reinforcement learning (RL). MADDPG, the Multi-Agent Deep Deterministic Policy Gradient algorithm, is a reinforcement learning algorithm used in multi-agent environments.
It can be understood that the utility optimization problem over the node utility and the server utility is converted into a Stackelberg game model, and the game equilibrium is solved using the MADDPG algorithm in DRL, so that, compared with traditional heuristic algorithms, the method can better adapt to the complex, non-cooperative environment of the industrial metaverse and achieve better resource allocation and node selection strategies.
For solving the Stackelberg game model using DRL, specifically, a state space (including the server's pricing policy and the nodes' caching policies), a partially observable space (observations based on the nodes' and server's historical policies), an action space (the server's reward policy and the nodes' caching policy adjustments), and reward functions (consistent with the utility functions) are defined, where the utility functions are the node utility functions and the server utility function. Using an Actor-Critic architecture, the Actor network outputs actions according to the environment state, the Critic network evaluates action values, and the Actor and Critic networks are trained alternately through experience replay and gradient updates, so that the policy converges to the optimal policy and the accumulated reward is maximized.
Specifically, in the DRL-based Stackelberg game process shown in fig. 2, the server acts as the leader and the nodes act as followers. During each training period, the server agent observes its state and determines its action, and each node agent observes its state and determines its action; the current state then transitions to the next state and each agent receives a reward. The detailed composition of the DRL controller of each agent is shown in fig. 3. The replay buffer is used to store the transition data collected during interaction, including the current state, action, reward, and next state. These stored transitions are sampled in batches to decorrelate sequential data and stabilize the training process. The Actor network and the Critic network each consist of three fully connected layers. The Actor network takes the current state as input and outputs the corresponding action through its policy. The Critic network evaluates the actions taken by the Actor network and provides a value estimate to guide policy improvement. Both the Actor and Critic networks are updated through two independent optimization modules: the policy optimizer updates the parameters of the Actor network based on policy gradients, and the value optimizer minimizes the temporal-difference error to refine the Critic network's value estimate.
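As a rough sketch of the Actor and Critic networks described above (three fully connected layers each), assuming a PyTorch implementation; the layer widths and the centralized-critic input layout are illustrative choices, not values given by the patent.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps an agent's local observation to its action (the server's reward or a node's period adjustment)."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # bounded action, rescaled by the environment
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

class Critic(nn.Module):
    """Centralized critic in MADDPG style: scores the joint observation-action pair of all agents."""
    def __init__(self, joint_obs_dim: int, joint_act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(joint_obs_dim + joint_act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, joint_obs: torch.Tensor, joint_act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))
```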
In one possible embodiment, the Stackelberg equilibrium is defined, backward induction is used to analyze the nodes' optimal decisions, the node and server utility functions are derived, and the nodes' optimal update periods and the server's optimal reward strategy are determined.
It can be seen that, in a meta-computing framework based on federated learning, the information age and service delay are calculated; the data size is adjusted by the quality adjustment parameter and combined with the information age to obtain the model quality; the model quality and service delay are adjusted by the conversion parameters to obtain satisfaction; the server utility is obtained according to the satisfaction and the server reward; a Stackelberg game model is constructed according to the node utility and the server utility, wherein the server, as leader, determines the unit rewards and the node, as follower, selects the node update period according to the server's unit rewards; and the Stackelberg equilibrium is solved using a deep reinforcement learning algorithm to obtain the incentive scheme. The conversion parameters adjust the model quality and the service delay, balancing the two and improving overall system performance. Integrating the satisfaction used to balance model quality and service delay into the server utility effectively incentivizes nodes to participate in federated learning.
The embodiment of the invention also provides an electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the satisfaction-based model training incentive method in federated learning. The electronic device may be any intelligent terminal, including a computer.
The embodiment of the invention also provides a storage medium storing a computer program which, when executed by a processor, implements the satisfaction-based model training incentive method in federated learning.
It should be appreciated that the method steps in embodiments of the present invention may be implemented or carried out by computer hardware, a combination of hardware and software, or by computer instructions stored in non-transitory computer-readable memory. The method may use standard programming techniques. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Furthermore, the operations of the processes described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes (or variations and/or combinations thereof) described herein may be performed under control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications), by hardware, or combinations thereof, collectively executing on one or more processors. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable computing platform, including, but not limited to, a personal computer, mini-computer, mainframe, workstation, network or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and so forth. Aspects of the invention may be implemented in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optical read and/or write storage medium, RAM, ROM, etc., such that it is readable by a programmable computer, which when read by a computer, is operable to configure and operate the computer to perform the processes described herein. Further, the machine readable code, or portions thereof, may be transmitted over a wired or wireless network. When such media includes instructions or programs that, in conjunction with a microprocessor or other data processor, implement the steps described above, the invention described herein includes these and other different types of non-transitory computer-readable storage media. The invention may also include the computer itself when programmed according to the methods and techniques of the present invention.
The computer program can be applied to the input data to perform the functions described herein, thereby converting the input data to generate output data that is stored to the non-volatile memory. The output information may also be applied to one or more output devices such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including specific visual depictions of physical and tangible objects produced on a display.
The present invention is not limited to the above embodiments; modifications, equivalent substitutions, and improvements made by the same means to achieve the technical effects of the present invention are all included within the spirit and principle of the present invention. Various modifications and variations of the technical solution and/or the embodiments are possible within the scope of the invention.

Claims (10)

1. A satisfaction-based model training incentive method in federated learning, comprising:
in a meta-computing framework based on federated learning, computing information age and service delay;
adjusting the data size by utilizing the quality adjustment parameter, and combining the information age to obtain the model quality;
adjusting the model quality and the service delay by using conversion parameters to obtain satisfaction;
obtaining the server utility according to the satisfaction and the server reward;
constructing a Stackelberg game model according to the node utility and the server utility, wherein the server, as a leader, determines a reward strategy, and the node, as a follower, selects a node update period according to the reward strategy of the server;
and solving the Stackelberg game model by using a deep reinforcement learning algorithm to obtain an incentive scheme.
2. The satisfaction-based model training incentive method in federated learning according to claim 1, wherein the information age is expressed as:
wherein A_i is the information age, θ_i is the node update period, and t is the unit time.
3. The satisfaction-based model training incentive method in federated learning according to claim 2, wherein adjusting the data size using the quality adjustment parameter and combining the information age to obtain the model quality comprises:
multiplying the data amount collected in the unit time period by the task duration to obtain a first intermediate value, and dividing the first intermediate value by the node update period to obtain the data size, which is expressed as:
D_i = dT/θ_i,
wherein D_i is the data size, T is the task duration, d is the data amount collected in the unit time period, and θ_i is the node update period;
multiplying the quality adjustment parameter by the data size to obtain a second intermediate value, and dividing the second intermediate value by the information age to obtain the model quality, which is expressed as:
Q_i = ρD_i/A_i,
wherein Q_i is the model quality, ρ is the quality adjustment parameter, D_i is the data size, and A_i is the information age.
4. The satisfaction-based model training incentive method in federated learning according to claim 3, wherein the node update period is expressed as:
θ_i = c_i·t + a_i·t,
where θ_i is the node update period, c_i·t is the time taken to collect and process the model training data, a_i·t is the duration from the end of data collection to the beginning of data collection in the next stage, and t is the unit time.
5. The satisfaction-based model training incentive method in federated learning according to claim 1, wherein the service delay is expressed as:
wherein E_i is the service delay, θ_i is the node update period, and t is the unit time.
6. The satisfaction-based model training incentive method in federated learning according to claim 1, wherein the conversion parameters include a quality conversion parameter and a delay conversion parameter;
said adjusting the model quality and the service delay using conversion parameters to obtain satisfaction comprises:
multiplying the quality conversion parameter by the model quality to obtain a third intermediate value;
multiplying the delay conversion parameter by the service delay to obtain a fourth intermediate value;
subtracting the fourth intermediate value from the third intermediate value to obtain the satisfaction, expressed as:
G_i = τQ_i - λE_i,
where G_i is the satisfaction, τ is the quality conversion parameter, λ is the delay conversion parameter, Q_i is the model quality, and E_i is the service delay.
7. The satisfaction-based model training incentive method in federated learning according to claim 1, wherein deriving the server utility from the satisfaction and the server reward comprises:
multiplying the satisfaction by the unit satisfaction profit to obtain the satisfaction gain;
obtaining the server utility according to the difference between the satisfaction gain and the server reward, which is expressed as:
V = Σ_{i∈I} (βG_i - R_i),
wherein V is the server utility, β is the unit satisfaction profit, G_i is the satisfaction, and R_i is the server reward.
8. The satisfaction-based model training incentive method in federated learning according to claim 1, wherein the node utility is obtained by:
dividing the unit cost of maintaining the node update period by the node update period to obtain the cost C_i = σ_i/θ_i;
subtracting the cost from the server reward yields the node utility, expressed as:
U_i = R_i - C_i,
wherein R_i is the server reward, r_i is the unit reward, θ_i is the node update period, C_i is the cost, σ_i is the unit cost of maintaining the node update period, and U_i is the node utility.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the satisfaction-based model training incentive method in federated learning according to any one of claims 1 to 8.
10. A storage medium storing a computer program which, when executed by a processor, implements the satisfaction-based model training incentive method in federated learning according to any one of claims 1 to 8.
CN202510703377.1A 2025-05-29 2025-05-29 A satisfaction-based model training incentive method in federated learning Pending CN120542602A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510703377.1A CN120542602A (en) 2025-05-29 2025-05-29 A satisfaction-based model training incentive method in federated learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510703377.1A CN120542602A (en) 2025-05-29 2025-05-29 A satisfaction-based model training incentive method in federated learning

Publications (1)

Publication Number Publication Date
CN120542602A true CN120542602A (en) 2025-08-26

Family

ID=96781531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510703377.1A Pending CN120542602A (en) 2025-05-29 2025-05-29 A satisfaction-based model training incentive method in federated learning

Country Status (1)

Country Link
CN (1) CN120542602A (en)

Similar Documents

Publication Publication Date Title
Huang et al. FedParking: A federated learning based parking space estimation with parked vehicle assisted edge computing
Jiang et al. Scalable mobile crowdsensing via peer-to-peer data sharing
Zhan et al. A learning-based incentive mechanism for federated learning
Zhan et al. Free market of multi-leader multi-follower mobile crowdsensing: An incentive mechanism design by deep reinforcement learning
Deng et al. Dynamical resource allocation in edge for trustable internet-of-things systems: A reinforcement learning method
Qi et al. High-quality model aggregation for blockchain-based federated learning via reputation-motivated task participation
Zhan et al. Incentive mechanism design for federated learning: Challenges and opportunities
Chen et al. Dim-ds: Dynamic incentive model for data sharing in federated learning based on smart contracts and evolutionary game theory
CN113992676A (en) Incentive method and system for layered federal learning under terminal edge cloud architecture and complete information
Jie et al. Online task scheduling for edge computing based on repeated Stackelberg game
CN111027709B (en) Information recommendation method and device, server and storage medium
CN117354310A (en) Node selection method, device, equipment and medium based on task collaboration
Wang et al. Reinforcement contract design for vehicular-edge computing scheduling and energy trading via deep Q-network with hybrid action space
Quan et al. An optimized task assignment framework based on crowdsourcing knowledge graph and prediction
Huang et al. A hierarchical incentive mechanism for federated learning
Fu et al. Distributed reinforcement learning-based memory allocation for edge-PLCs in industrial IoT
Ramesh et al. Comparative analysis of Q-learning, SARSA, and deep Q-network for microgrid energy management
Chen et al. A pricing approach toward incentive mechanisms for participant mobile crowdsensing in edge computing
Lin et al. When MetaVerse meets computing power networking: an energy-efficient framework for service placement
Fu et al. Incentive Mechanism Against Bounded Rationality for Federated Learning-Enabled Internet of UAVs: A Prospect Theory-Based Approach
Gao et al. A task allocation and pricing mechanism based on Stackelberg game for edge-assisted crowdsensing
CN118938966A (en) A collaborative optimization method for edge network resource scheduling and drone motion control
Zhang et al. Adaptive device sampling and deadline determination for cloud-based heterogeneous federated learning
CN120542602A (en) A satisfaction-based model training incentive method in federated learning
Liu et al. A smart grid computational offloading policy generation method for end-edge-cloud environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination