The Role of Explainable AI in Understanding Phishing Susceptibility
Keywords:
Explainable AI (XAI), Phishing Detection, Cybersecurity, LIME, SHAP, Interpretable Models, Phishing Susceptibility, Machine Learning

Abstract
Phishing attacks continue to pose a significant threat to cybersecurity, exploiting human vulnerabilities through deceptive tactics to steal sensitive information. As phishing techniques evolve, traditional detection methods struggle to maintain accuracy and user trust. This paper explores the role of Explainable AI (XAI) in enhancing phishing detection and in understanding susceptibility to phishing attacks. We examine several XAI techniques, including feature importance analysis, Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and inherently interpretable models, and their application to improving the transparency and effectiveness of phishing detection systems. Through an analysis of these methods, we highlight how XAI can provide actionable insights, reduce false positives, and foster greater user engagement and education. We evaluate the impact of XAI on user comprehension, trust, and behavior, demonstrating its potential to bridge the gap between sophisticated AI systems and user understanding. Our findings suggest that integrating XAI into phishing detection systems not only enhances technical performance but also contributes to a more informed and resilient approach to cybersecurity. We propose future research directions to further refine XAI techniques and explore their long-term benefits in real-world scenarios.
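To illustrate the kind of per-decision rationale the abstract describes, the sketch below applies LIME to a toy phishing classifier. It is a minimal sketch under stated assumptions, not the paper's system: the feature names (url_length, num_dots, has_ip_address, uses_https, domain_age_days), the synthetic data, and the labelling rule are all hypothetical, and it assumes the scikit-learn and lime packages are installed.

# A minimal sketch of explaining one phishing verdict with LIME.
# All feature names, data, and the labelling rule below are hypothetical
# placeholders, not the paper's dataset. Requires scikit-learn and lime.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(42)
feature_names = ["url_length", "num_dots", "has_ip_address",
                 "uses_https", "domain_age_days"]

# Synthetic URL features; label 1 = phishing, 0 = legitimate.
X = rng.random((500, 5))
X[:, 0] *= 200                       # URL length in characters
X[:, 1] = rng.integers(0, 8, 500)    # dots in the hostname
X[:, 2] = rng.integers(0, 2, 500)    # raw IP address in the URL?
X[:, 3] = rng.integers(0, 2, 500)    # served over HTTPS?
X[:, 4] *= 3650                      # domain age in days
# Toy labelling rule: long URLs on young domains, or raw IPs, look phishy.
y = (((X[:, 0] > 100) & (X[:, 4] < 365)) | (X[:, 2] == 1)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME fits a sparse linear surrogate around one instance, so the weights
# it returns explain only this single prediction, not the model globally.
explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["legitimate", "phishing"], mode="classification")

suspicious = np.array([180.0, 5.0, 1.0, 0.0, 30.0])  # one hypothetical URL
exp = explainer.explain_instance(
    suspicious, model.predict_proba, num_features=5)
for feature, weight in exp.as_list():
    print(f"{feature:>30s}  weight={weight:+.3f}")

Each printed weight indicates how strongly one feature pushed this single prediction toward the phishing class; surfacing such per-verdict attributions to analysts and end users is the kind of transparency the abstract argues builds comprehension and trust.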