Research Seminar on AI: Federated Learning in Heterogeneous Networks with Unreliable Communication
Tuesday, 17 May 2022, 4:00 p.m.
Due to the massive amount of data generated by the growing number and variety of Internet-of-Things devices, machine learning can be extremely computationally intensive, and training times are long. Federated learning (FL) has been proposed as a viable solution to communication-bandwidth and privacy concerns: local workers collaboratively learn a global model from their local data, communicating only trained models to a central server. Because of its local nature, FL is subject to various heterogeneities, including variable system characteristics across devices and non-identically distributed (non-i.i.d.) data. To address these concerns, the Federated Proximal (FedProx) training algorithm, a generalization of Federated Averaging (FedAvg), has been considered a promising FL paradigm that provides more stable learning convergence in the presence of computation stragglers and non-i.i.d. data, assuming error-free communication.
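To make the relationship between the two schemes concrete, here is a minimal sketch (not the speaker's implementation) of the standard FedAvg server aggregation and a FedProx-style local update: FedProx adds a proximal term (mu/2)·||w − w_global||² to each client's local objective, and setting mu = 0 recovers a plain FedAvg local SGD step.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg server step: average client models weighted by local dataset size."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

def fedprox_local_step(w, w_global, grad_fn, mu=0.1, lr=0.01):
    """One local SGD step on the FedProx objective
    F_k(w) + (mu/2) * ||w - w_global||^2.
    The gradient of the proximal term is mu * (w - w_global);
    with mu = 0 this reduces to an ordinary FedAvg local step."""
    return w - lr * (grad_fn(w) + mu * (w - w_global))

# Illustrative usage with a toy quadratic local loss F_k(w) = ||w - 1||^2:
w_global = np.zeros(2)
grad_fn = lambda w: 2.0 * (w - np.ones(2))
w_local = fedprox_local_step(np.full(2, 0.5), w_global, grad_fn, mu=0.1, lr=0.1)
```

The proximal term pulls each client's iterate toward the current global model, which is what limits the drift caused by non-i.i.d. data and partial local work from stragglers.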
However, in wireless networks with limited communication resources and random channel behavior, packet transmission errors must be taken into account, which introduces an additional dimension of heterogeneity into FL. In this work, Paul Zheng and his team prove the convergence of FedProx in the presence of transmission packet errors in a heterogeneous network and propose a client selection strategy that increases the learning convergence rate. In experiments on the MNIST image dataset and a synthetic dataset, their method converges faster than other state-of-the-art client selection methods.
Paul Zheng is a research assistant at the Institute of Information Theory and Data Analytics at RWTH Aachen University, where he is a Ph.D. student under the supervision of Prof. Anke Schmeink. His research interests include ultra-reliable low-latency communications and federated learning in wireless networks.