Documentation ¶
Overview ¶
Package federated provides federated learning interfaces and a FedAvg baseline implementation. (Stability: alpha)
Federated learning enables training across distributed clients without centralising raw data. The Coordinator orchestrates training rounds: it selects participating clients, distributes global weights, collects local ModelUpdate results, and aggregates them via a pluggable Strategy.
Strategy Interface ¶
Strategy defines how client updates are aggregated and which clients participate in each round. The package ships FedAvg, which computes a weighted average of model updates proportional to each client's dataset size.
Coordinator ¶
NewCoordinator creates a coordinator from a Strategy and a CoordinatorConfig. Call Coordinator.RunRound to execute a single federated learning round end-to-end:
coord := federated.NewCoordinator(federated.NewFedAvg(), federated.CoordinatorConfig{
	MinClients: 3,
	MaxRounds:  100,
})
result, err := coord.RunRound(clients)
if err != nil {
	log.Fatal(err)
}
Client Interface ¶
Client represents a federated participant that performs local training and reports a ModelUpdate back to the coordinator.
Index ¶
Constants ¶
const MaxCumulativeEpsilon = 1000.0
MaxCumulativeEpsilon is the upper bound on cumulative privacy budget. Aggregation fails once this threshold is reached.
Variables ¶
This section is empty.
Functions ¶
func ProximalLoss ¶
ProximalLoss computes the proximal penalty term: (mu / 2) * ||localWeights - globalWeights||^2. This value is added to the local training loss to penalize client model divergence.
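As a sketch, the documented formula can be written out directly. The lowercase name and signature below are illustrative only, not the package's exported API:

```go
package main

import "fmt"

// proximalLoss mirrors the documented formula:
// (mu / 2) * ||localWeights - globalWeights||^2.
// Name and signature are hypothetical, not the package's actual API.
func proximalLoss(mu float64, local, global []float64) float64 {
	var sq float64
	for i := range local {
		d := local[i] - global[i]
		sq += d * d // accumulate squared L2 distance
	}
	return mu / 2 * sq
}

func main() {
	// ||(1,1)||^2 = 2, so with mu = 0.1 the penalty is 0.1.
	fmt.Println(proximalLoss(0.1, []float64{1, 2}, []float64{0, 1}))
}
```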
Types ¶
type AggregatedModel ¶
AggregatedModel holds the result of aggregating client updates.
type Client ¶
type Client interface {
// Train performs local training starting from globalWeights and returns
// the resulting model update.
Train(globalWeights []float64) (*ModelUpdate, error)
// ID returns the client's unique identifier.
ID() ClientID
}
Client represents a federated learning participant.
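A minimal Client implementation might look like the following sketch. The stub type declarations mirror the package's exported types so the example is self-contained, and the training step is a placeholder for real local optimization:

```go
package main

import "fmt"

// Stub declarations mirroring the package's exported types, so the
// sketch compiles on its own; in real use, import the federated
// package instead of redeclaring these.
type ClientID string

type ModelUpdate struct {
	ClientID ClientID
	Weights  []float64
	NSamples int
	Metrics  map[string]float64
}

// localClient is a hypothetical Client implementation. Its Train step
// just shrinks each weight as a stand-in for real local SGD.
type localClient struct {
	id   ClientID
	data [][]float64 // local dataset; only its size matters here
}

func (c *localClient) ID() ClientID { return c.id }

func (c *localClient) Train(globalWeights []float64) (*ModelUpdate, error) {
	weights := make([]float64, len(globalWeights))
	for i, w := range globalWeights {
		weights[i] = w * 0.9 // placeholder for a real training step
	}
	return &ModelUpdate{
		ClientID: c.id,
		Weights:  weights,
		NSamples: len(c.data), // reported so FedAvg can weight this update
	}, nil
}

func main() {
	c := &localClient{id: "client-1", data: make([][]float64, 40)}
	update, err := c.Train([]float64{1, 2})
	if err != nil {
		panic(err)
	}
	fmt.Println(update.ClientID, update.NSamples, update.Weights)
}
```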
type Coordinator ¶
type Coordinator struct {
// contains filtered or unexported fields
}
Coordinator manages federated learning rounds using a pluggable strategy.
func NewCoordinator ¶
func NewCoordinator(strategy Strategy, config CoordinatorConfig) *Coordinator
NewCoordinator creates a coordinator with the given strategy and config.
func (*Coordinator) Round ¶
func (c *Coordinator) Round() int
Round returns the current round number.
func (*Coordinator) RunRound ¶
func (c *Coordinator) RunRound(clients []Client) (*RoundResult, error)
RunRound executes a single federated learning round. It selects clients, distributes global weights (empty on the first round), collects updates, and aggregates them via the strategy.
type CoordinatorConfig ¶
type CoordinatorConfig struct {
// MinClients is the minimum number of clients required to run a round.
MinClients int
// MaxRounds is the maximum number of federated rounds to execute.
MaxRounds int
// ConvergenceThreshold stops training when the aggregated loss delta
// falls below this value. Zero disables convergence checking.
ConvergenceThreshold float64
}
CoordinatorConfig holds configuration for a federated learning coordinator.
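The interplay of MaxRounds and ConvergenceThreshold can be sketched with a mock driver loop. Both `trainingRounds` and the synthetic loss curve are illustrative stand-ins, not part of the package:

```go
package main

import (
	"fmt"
	"math"
)

// trainingRounds mimics a driver loop around Coordinator.RunRound: it
// stops early once the loss delta falls below threshold (zero disables
// the check), otherwise runs for maxRounds. lossAt stands in for the
// aggregated loss a real round would report.
func trainingRounds(maxRounds int, threshold float64, lossAt func(round int) float64) int {
	prev := math.Inf(1)
	for round := 0; round < maxRounds; round++ {
		loss := lossAt(round)
		if threshold > 0 && math.Abs(prev-loss) < threshold {
			return round + 1 // converged on this round
		}
		prev = loss
	}
	return maxRounds
}

func main() {
	rounds := trainingRounds(100, 1e-3, func(r int) float64 {
		return 1.0 / float64(r+1) // synthetic, steadily flattening loss
	})
	fmt.Println("converged after", rounds, "rounds")
}
```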
type DPConfig ¶
type DPConfig struct {
// Epsilon is the privacy budget parameter (must be > 0).
Epsilon float64
// Delta is the failure probability (must be in (0, 1)).
Delta float64
// ClipNorm is the L2 norm bound for gradient clipping.
ClipNorm float64
// Mechanism selects the noise distribution: "gaussian" or "laplacian".
Mechanism string
}
DPConfig configures differential privacy noise injection.
type DPStrategy ¶
type DPStrategy struct {
// contains filtered or unexported fields
}
DPStrategy wraps any Strategy and adds differential privacy noise to aggregated weights. It clips each client update to ClipNorm before delegating to the inner strategy, then adds calibrated noise.
func NewDPStrategy ¶
func NewDPStrategy(inner Strategy, config DPConfig) (*DPStrategy, error)
NewDPStrategy creates a DPStrategy that wraps inner with the given DP config. It returns an error if the config is invalid.
func (*DPStrategy) Accountant ¶
func (d *DPStrategy) Accountant() *PrivacyAccountant
Accountant returns the privacy accountant tracking cumulative budget.
func (*DPStrategy) Aggregate ¶
func (d *DPStrategy) Aggregate(updates []ModelUpdate) (*AggregatedModel, error)
Aggregate clips each client update, delegates to the inner strategy, then adds calibrated DP noise to the aggregated weights.
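The clip-then-noise pipeline can be sketched in isolation. `clipToNorm` mirrors the documented L2 clipping; `gaussianSigma` uses one standard Gaussian-mechanism calibration, which may differ from the package's internal choice:

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// clipToNorm rescales v so its L2 norm is at most clipNorm, the same
// per-update clipping Aggregate is documented to perform.
func clipToNorm(v []float64, clipNorm float64) []float64 {
	var sq float64
	for _, x := range v {
		sq += x * x
	}
	norm := math.Sqrt(sq)
	if norm <= clipNorm {
		return v // already within the bound
	}
	scale := clipNorm / norm
	out := make([]float64, len(v))
	for i, x := range v {
		out[i] = x * scale
	}
	return out
}

// gaussianSigma is one textbook calibration for (epsilon, delta)-DP
// with the Gaussian mechanism; an assumption, not the package's formula.
func gaussianSigma(clipNorm, epsilon, delta float64) float64 {
	return clipNorm * math.Sqrt(2*math.Log(1.25/delta)) / epsilon
}

func main() {
	clipped := clipToNorm([]float64{3, 4}, 1.0) // ||(3,4)|| = 5, rescaled to norm 1
	fmt.Println(clipped)

	sigma := gaussianSigma(1.0, 1.0, 1e-5)
	noisy := make([]float64, len(clipped))
	for i, x := range clipped {
		noisy[i] = x + rand.NormFloat64()*sigma // add calibrated noise
	}
	fmt.Println(noisy)
}
```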
func (*DPStrategy) SelectClients ¶
func (d *DPStrategy) SelectClients(round int, available []ClientID) []ClientID
SelectClients delegates to the inner strategy.
type FedAvg ¶
type FedAvg struct{}
FedAvg implements the Federated Averaging strategy. It computes a weighted average of client model updates, where each client's contribution is proportional to its dataset size (NSamples).
func (*FedAvg) Aggregate ¶
func (f *FedAvg) Aggregate(updates []ModelUpdate) (*AggregatedModel, error)
Aggregate computes the weighted average of model updates. Each update's weights are scaled by the proportion of samples that client contributed.
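The weighted average itself is simple to sketch. This standalone `fedAvg` mirrors the documented behavior but is not the package's implementation:

```go
package main

import "fmt"

type update struct {
	weights  []float64
	nSamples int
}

// fedAvg computes the sample-weighted average described above: each
// client's weights are scaled by nSamples/totalSamples and summed.
// Assumes a non-empty slice of equal-length weight vectors.
func fedAvg(updates []update) []float64 {
	total := 0
	for _, u := range updates {
		total += u.nSamples
	}
	avg := make([]float64, len(updates[0].weights))
	for _, u := range updates {
		w := float64(u.nSamples) / float64(total)
		for i, x := range u.weights {
			avg[i] += w * x
		}
	}
	return avg
}

func main() {
	// Client A (100 samples) pulls twice as hard as client B (50 samples).
	got := fedAvg([]update{
		{weights: []float64{3, 0}, nSamples: 100},
		{weights: []float64{0, 3}, nSamples: 50},
	})
	fmt.Println(got)
}
```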
type FedProx ¶
type FedProx struct {
// contains filtered or unexported fields
}
FedProx implements the Federated Proximal strategy. It extends FedAvg by adding a proximal term ((mu / 2) * ||w - w_global||^2) to the local training objective, which limits client model divergence from the global model. The aggregation step is identical to FedAvg (sample-weighted average), but clients are expected to incorporate the proximal penalty during training.
func NewFedProx ¶
NewFedProx returns a new FedProx strategy. The mu parameter controls the strength of the proximal term: higher values penalize divergence from the global model more strongly.
func (*FedProx) Aggregate ¶
func (f *FedProx) Aggregate(updates []ModelUpdate) (*AggregatedModel, error)
Aggregate computes a sample-weighted average of client updates, identical to FedAvg. The proximal penalty is applied during client-side training (reflected in the weights clients send back), not during aggregation.
type ModelUpdate ¶
type ModelUpdate struct {
ClientID ClientID
Weights []float64
NSamples int
Metrics map[string]float64
}
ModelUpdate holds the result of a client's local training round.
type PrivacyAccountant ¶
type PrivacyAccountant struct {
// contains filtered or unexported fields
}
PrivacyAccountant tracks the cumulative differential privacy budget spent across federated rounds using basic composition.
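Basic composition simply sums the per-round budgets. The following is a sketch of that idea only; the package's PrivacyAccountant fields are unexported and its internals may differ:

```go
package main

import "fmt"

// accountant tracks cumulative (epsilon, delta) under basic
// composition: budgets add linearly across rounds.
type accountant struct {
	epsilon, delta float64
}

func (a *accountant) spend(eps, delta float64) {
	a.epsilon += eps
	a.delta += delta
}

func (a *accountant) canContinue(maxEpsilon float64) bool {
	return a.epsilon < maxEpsilon
}

// roundsWithinBudget counts rounds until the budget check fails. The
// check runs before each round, so the last round may overshoot maxEps.
func roundsWithinBudget(maxEps, perRoundEps float64) int {
	a := &accountant{}
	rounds := 0
	for a.canContinue(maxEps) {
		a.spend(perRoundEps, 1e-6)
		rounds++
	}
	return rounds
}

func main() {
	// With epsilon = 0.3 per round, 4 rounds start under a budget of 1.0.
	fmt.Println(roundsWithinBudget(1.0, 0.3))
}
```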
func (*PrivacyAccountant) CanContinue ¶
func (p *PrivacyAccountant) CanContinue(maxEpsilon float64) bool
CanContinue returns true if the cumulative epsilon is below maxEpsilon.
func (*PrivacyAccountant) Spent ¶
func (p *PrivacyAccountant) Spent() (epsilon, delta float64)
Spent returns the cumulative (epsilon, delta) privacy budget consumed.
type RoundResult ¶
type RoundResult struct {
Model *AggregatedModel
Updates []ModelUpdate
}
RoundResult holds the outcome of a single federated round.
type Strategy ¶
type Strategy interface {
// Aggregate combines multiple client updates into a single model.
Aggregate(updates []ModelUpdate) (*AggregatedModel, error)
// SelectClients chooses which clients participate in the given round.
SelectClients(round int, available []ClientID) []ClientID
}
Strategy defines the federated aggregation and client selection policy.