GC-Fed: Gradient centralized federated learning with partial client participation
  • Seo, Jungwon
  • Catak, Ferhat Ozgur
  • Rong, Chunming
  • Hong, Kibeom
  • Kim, Minhoe

Abstract

Federated Learning (FL) enables privacy-preserving multi-source information fusion (MSIF) but suffers from client drift in highly heterogeneous data settings. Many existing approaches mitigate drift by providing clients with common reference points, typically derived from past information, to align objectives or gradient directions. However, under severe partial participation, such history-dependent references may become unreliable, as the set of client data distributions participating in each round can vary drastically. To overcome this limitation, we propose a method that mitigates client drift without relying on past information by constraining the update space through Gradient Centralization (GC). Specifically, we introduce Local GC and Global GC, which apply GC at the local and global update stages, respectively, and further present GC-Fed, a hybrid formulation that generalizes both. Theoretical analysis and extensive experiments on benchmark FL tasks demonstrate that GC-Fed effectively alleviates client drift and achieves accuracy improvements of up to 20% under heterogeneous data and partial participation conditions.
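
For background, the GC operator (Yong et al., 2020) centers a multi-dimensional weight gradient by subtracting its mean taken over all dimensions except the output-channel dimension. Below is a minimal PyTorch sketch of that operator and of where the abstract's Local GC and Global GC variants would plausibly apply it; the function names and the exact update rules here are illustrative assumptions, not the paper's implementation.

```python
import torch

def centralize_gradient(grad: torch.Tensor) -> torch.Tensor:
    # Gradient Centralization (Yong et al., 2020): for weight gradients with
    # more than one dimension, subtract the mean over all dims except dim 0
    # (output channels); 1-D parameters such as biases pass through unchanged.
    if grad.dim() > 1:
        return grad - grad.mean(dim=tuple(range(1, grad.dim())), keepdim=True)
    return grad

def local_gc_step(model: torch.nn.Module, loss: torch.Tensor, lr: float = 0.01) -> None:
    # "Local GC" (illustrative): centralize each gradient inside a client's
    # plain SGD step before applying the parameter update.
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr * centralize_gradient(p.grad)
    model.zero_grad()

def global_gc_aggregate(global_params: dict, client_deltas: list) -> None:
    # "Global GC" (illustrative): average the client updates into a
    # pseudo-gradient and centralize it before the server applies it.
    for name, p in global_params.items():
        avg_delta = torch.stack([d[name] for d in client_deltas]).mean(dim=0)
        with torch.no_grad():
            p += centralize_gradient(avg_delta)
```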

Keywords

Multi-source information fusion; Federated learning; Optimization; Gradient centralization
Title
GC-Fed: Gradient centralized federated learning with partial client participation
Authors
Seo, Jungwon; Catak, Ferhat Ozgur; Rong, Chunming; Hong, Kibeom; Kim, Minhoe
DOI
10.1016/j.inffus.2026.104148
Publication date
2026-07
Type
Article
Journal
Information Fusion
Volume
131