Fed-SE: Federated Self-Evolution for Privacy-Constrained Multi-Environment LLM Agents
Abstract
Fed-SE, a Federated Self-Evolution framework, enhances LLM agents in privacy-constrained environments by combining local parameter-efficient fine-tuning with global aggregation in a low-rank subspace.
LLM agents are widely deployed in complex interactive tasks, yet privacy constraints often preclude centralized optimization and co-evolution across dynamic environments. While Federated Learning (FL) has proven effective on static datasets, its extension to the open-ended self-evolution of agents remains underexplored. Directly applying standard FL is challenging: heterogeneous tasks and sparse, trajectory-level rewards introduce severe gradient conflicts that destabilize global optimization. To bridge this gap, we propose Fed-SE, a Federated Self-Evolution framework for LLM agents. Fed-SE establishes a paradigm of local evolution and global aggregation. Locally, each agent performs parameter-efficient fine-tuning on filtered, high-return trajectories to obtain stable gradient updates. Globally, Fed-SE aggregates these updates within a low-rank subspace that disentangles environment-specific dynamics, effectively reducing negative transfer across clients. Experiments across five heterogeneous environments demonstrate that Fed-SE improves average task success rates by approximately 18% over federated baselines, validating its effectiveness for robust cross-environment knowledge transfer in privacy-constrained deployments.
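To make the two-stage pipeline concrete, here is a minimal Python sketch of the local filtering and global aggregation steps. Everything below is illustrative: the quantile-based return filter, the SVD-based shared-subspace projection, and all function names are assumptions for exposition, not the paper's exact algorithm.

```python
# Illustrative sketch of Fed-SE-style filtering and low-rank aggregation.
# Assumed interfaces; the paper's precise rules may differ.
import numpy as np

def filter_trajectories(trajectories, return_quantile=0.7):
    """Keep only high-return trajectories for stable local updates.
    `trajectories` is a list of (trajectory, scalar_return) pairs."""
    returns = np.array([r for _, r in trajectories])
    threshold = np.quantile(returns, return_quantile)
    return [traj for traj, r in trajectories if r >= threshold]

def aggregate_low_rank(client_deltas, rank=8):
    """Aggregate per-client LoRA weight deltas (each d_out x d_in) inside
    a shared low-rank subspace, attenuating client-specific directions."""
    mean_delta = np.mean(client_deltas, axis=0)
    # Principal subspace of the averaged update: directions shared across clients.
    U, S, Vt = np.linalg.svd(mean_delta, full_matrices=False)
    U_r, Vt_r = U[:, :rank], Vt[:rank]
    # Project each client's update onto the shared subspace before averaging,
    # so environment-specific components conflict less destructively.
    projected = [U_r @ (U_r.T @ delta @ Vt_r.T) @ Vt_r for delta in client_deltas]
    return np.mean(projected, axis=0)

# Toy usage: 3 clients, each contributing a 16x16 rank-4 local update.
rng = np.random.default_rng(0)
deltas = [rng.standard_normal((16, 4)) @ rng.standard_normal((4, 16))
          for _ in range(3)]
global_delta = aggregate_low_rank(deltas, rank=4)
print(global_delta.shape)  # (16, 16)
```

Projecting each client's delta onto the principal subspace of the mean update keeps the directions clients agree on, which is one plausible way to realize the abstract's claim that aggregation in a low-rank subspace reduces negative transfer.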
Community
Librarian Bot (automated): the following similar papers were recommended by the Semantic Scholar API.
- ILoRA: Federated Learning with Low-Rank Adaptation for Heterogeneous Client Aggregation (2025)
- FIRM: Federated In-client Regularized Multi-objective Alignment for Large Language Models (2025)
- Implicit Federated In-context Learning For Task-Specific LLM Fine-Tuning (2025)
- ADF-LoRA: Alternating Low-Rank Aggregation for Decentralized Federated Fine-Tuning (2025)
- Scaling Agent Learning via Experience Synthesis (2025)
- DOLFIN: Balancing Stability and Plasticity in Federated Continual Learning (2025)
- Reviving Stale Updates: Data-Free Knowledge Distillation for Asynchronous Federated Learning (2025)