Federated Learning (FL) is a distributed machine learning approach in which clients contribute to learning a global model in a privacy-preserving manner. Effective aggregation of client models is essential to creating a generalised global model. The extent to which a client generalises, and thereby contributes to this aggregation, can be ascertained by analysing inter-client relationships. We model such relationships using similarity between clients, and explore how similarity knowledge can be inferred by comparing client gradients rather than client data, since the latter would violate the privacy-preserving constraint in FL. The similarity-guided FedSim algorithm introduced in this paper decomposes FL aggregation into local and global steps. Clients with similar gradients are clustered to provide local aggregations, which are thereafter aggregated globally to ensure better coverage whilst reducing variance. We evaluate FedSim on both real-world datasets and synthetic datasets in which statistical heterogeneity can be controlled and studied systematically. A comparative study against the state-of-the-art FL baselines FedAvg and FedProx shows significant performance gains. Our findings confirm that by exploiting latent inter-client similarities, FedSim is significantly more accurate and more stable than both these baselines.
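The local/global decomposition described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's exact algorithm: the greedy threshold clustering, the `threshold` parameter, and the size-weighted global step are all assumptions made for the sake of a self-contained example.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two flattened gradient vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-12)

def cluster_by_gradient_similarity(grads, threshold=0.5):
    """Greedy clustering: join the first cluster whose representative
    gradient is sufficiently similar, else start a new cluster.
    (A hypothetical stand-in for the paper's clustering step.)"""
    clusters = []
    for i, g in enumerate(grads):
        for c in clusters:
            if cosine_sim(g, grads[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

def average(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[k] for v in vectors) / n for k in range(len(vectors[0]))]

def fedsim_aggregate(weights, grads, threshold=0.5):
    """Two-step aggregation: average within each gradient-similarity
    cluster (local), then combine cluster models weighted by cluster
    size (global)."""
    clusters = cluster_by_gradient_similarity(grads, threshold)
    local = [average([weights[i] for i in c]) for c in clusters]
    total = sum(len(c) for c in clusters)
    dim = len(local[0])
    return [sum(len(c) * m[k] for c, m in zip(clusters, local)) / total
            for k in range(dim)]
```

For instance, four clients whose gradients split into two orthogonal groups would first be averaged within their own clusters, and only then merged into the global model, so a divergent group cannot dominate the update.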
PALIHAWADANA, C., WIRATUNGA, N., WIJEKOON, A. and KALUTARAGE, H. 2022. FedSim: similarity guided model aggregation for federated learning. Neurocomputing [online], 483: distributed machine learning, optimization and applications, pages 432-445. Available from: https://doi.org/10.1016/j.neucom.2021.08.141