Research Repository

Outputs (3)

FedSim: similarity guided model aggregation for federated learning. (2021)
Journal Article
PALIHAWADANA, C., WIRATUNGA, N., WIJEKOON, A. and KALUTARAGE, H. 2022. FedSim: similarity guided model aggregation for federated learning. Neurocomputing [online], 483: distributed machine learning, optimization and applications, pages 432-445. Available from: https://doi.org/10.1016/j.neucom.2021.08.141

Federated Learning (FL) is a distributed machine learning approach in which clients contribute to learning a global model in a privacy-preserving manner. Effective aggregation of client models is essential to create a generalised global model. To what...

Effectiveness of app-delivered, tailored self-management support for adults with lower back pain-related disability: a selfBACK randomized clinical trial. (2021)
Journal Article
SANDAL, L.F., BACH, K., ØVERÅS, C.K., WIRATUNGA, N., COOPER, K., et al. 2021. Effectiveness of app-delivered, tailored self-management support for adults with lower back pain-related disability: a selfBACK randomized clinical trial. JAMA Internal Medicine [online], 181(10), pages 1288-1296. Available from: https://doi.org/10.1001/jamainternmed.2021.4097

Importance: Lower back pain (LBP) is a prevalent and challenging condition in primary care. The effectiveness of an individually tailored self-management support tool delivered via a smartphone app has not been rigorously tested. Objective: To invest...

Evaluating explainability methods intended for multiple stakeholders. (2021)
Journal Article
MARTIN, K., LIRET, A., WIRATUNGA, N., OWUSU, G. and KERN, M. 2021. Evaluating explainability methods intended for multiple stakeholders. KI - Künstliche Intelligenz [online], 35(3-4), pages 397-411. Available from: https://doi.org/10.1007/s13218-020-00702-6

Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This can present a challenge to organisations looking to satisfy the explanat...