
Enhancing Android application security through source code vulnerability mitigation using artificial intelligence: a privacy-preserved, community-driven, federated-learning-based approach.

Senanayake, Janaka Maduwantha Dias

Contributors

Mhd Omar Al-Kadri
Supervisor

Andrei Petrovski
Supervisor

Luca Piras
Supervisor

Abstract

As technology advances, the number of Android devices and apps is growing rapidly. Adhering to security protocols during app development is crucial, especially as many apps lack sufficient safeguards. Although automated tools are used for risk mitigation, their ability to detect vulnerabilities is limited. This doctoral research therefore proposes a novel, highly accurate, efficient, privacy-preserving and community-driven approach that uses artificial intelligence (AI) techniques to detect Android source code vulnerabilities in real time, with a focus on continuous model improvement. To train the initial AI model, a dataset was curated containing Android source code labelled according to the Common Weakness Enumeration (CWE), obtained by scanning over 15,000 real-world Android apps. A proof of concept demonstrated the suitability of this dataset for training various machine learning (ML) models. The model then evolved into a deep-learning-based system incorporating a shallow neural network. Improving the model's performance requires collecting additional data from a variety of sources, including source code from both software firms and individual developers alongside the LVDAndro dataset, while respecting the privacy of that code. To this end, the final model integrates a federated learning method underpinned by blockchain technology, ensuring security, privacy and community involvement. The final models perform strongly: both the binary and multi-class models achieve 96% accuracy and an F1-score of 0.96. The model's predictions are further explained using explainable AI (XAI), giving developers guidance on potential mitigation strategies. The AI model serves as the backend of an API and is also integrated as a plugin in Android Studio, enabling instantaneous vulnerability detection at an average of 300 ms per scanned line of code. With this plugin, app developers can build safer applications, reducing the risk of source code vulnerabilities. Android app developers who tested the solution found the plugin highly effective for real-time vulnerability mitigation.
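The federated learning idea at the heart of the abstract — participants train on their own private code and share only model parameters, never the code itself — can be illustrated with a minimal weighted-averaging sketch. This is an illustrative assumption, not the thesis's actual implementation: the function name, the plain-list weight vectors and the sample-count weighting are all hypothetical, and the real system additionally involves blockchain-backed aggregation.

```python
# Minimal sketch of federated averaging: each client trains locally on its
# private source code and contributes only its model weights; the server
# combines them, weighted by each client's local training-set size.
# All names and values here are illustrative, not from the thesis.

def federated_average(client_weights, client_sizes):
    """Aggregate per-client weight vectors into a global model.

    client_weights: list of weight vectors (one per client)
    client_sizes: number of local training samples per client,
                  used to weight each client's contribution
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += w * (size / total)
    return global_weights

# Two hypothetical clients with 2-parameter models; client 2 holds 3x the data.
clients = [[0.0, 1.0], [1.0, 0.0]]
sizes = [1, 3]
print(federated_average(clients, sizes))  # -> [0.75, 0.25]
```

No raw data leaves a client in this scheme; only the averaged parameters circulate, which is what allows firms and individual developers to contribute without exposing proprietary code.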

Citation

SENANAYAKE, J.M.D. 2024. Enhancing Android application security through source code vulnerability mitigation using artificial intelligence: a privacy-preserved, community-driven, federated-learning-based approach. Robert Gordon University, PhD thesis. Hosted on OpenAIR [online]. Available from: https://doi.org/10.48526/rgu-wt-2801183

Thesis Type: Thesis
Deposit Date: Apr 22, 2025
Publicly Available Date: Apr 22, 2025
DOI: https://doi.org/10.48526/rgu-wt-2801183
Keywords: Android applications; Software code; Cybersecurity; Systems security; Artificial intelligence; Explainable artificial intelligence (XAI); Machine learning; Federated learning; Blockchain technologies
Public URL: https://rgu-repository.worktribe.com/output/2801183
Award Date: Oct 31, 2024
