Explainable AI For Fraud Detection in Financial Transactions
Bhanu Duggal
TLDR
This research article investigates various XAI strategies for increasing transparency and confidence in fraud detection algorithms and examines the efficacy of SHAP, LIME, and attention mechanisms in providing insight into model predictions.
Abstract—Explainable AI (XAI) improves machine learning models’ interpretability, especially for detecting financial fraud. Financial fraud is a growing threat, with criminals using increasingly sophisticated methods to circumvent standard security measures. This research article investigates various XAI strategies for increasing transparency and confidence in fraud detection algorithms. The study examines the efficacy of SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), and attention mechanisms in providing insight into model predictions. We examine the existing obstacles of using XAI in fraud detection systems and provide approaches to improve both interpretability and prediction performance. This study helps to develop more transparent and trustworthy AI-driven fraud detection tools, hence facilitating regulatory compliance and improving decision-making in financial institutions.
Index Terms—XAI, SHAP, LIME, fraud detection, financial transactions, interpretability
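To make the core idea behind SHAP concrete before the main text, the following is a minimal sketch of exact Shapley-value attribution for a toy fraud-scoring function. The scoring function, feature names, and baseline transaction are all illustrative assumptions, not taken from this paper; real SHAP implementations (e.g. the `shap` library) approximate this computation efficiently for large models.

```python
from itertools import permutations

# Hypothetical fraud score: a linear function of three transaction features
# (normalized amount, foreign-transaction flag, time-of-day risk).
# This stands in for a real fraud-detection model purely for illustration.
def fraud_score(x):
    return 0.5 * x[0] + 0.3 * x[1] + 0.2 * x[2]

BASELINE = [0.0, 0.0, 0.0]  # assumed "average" transaction for absent features

def shapley_values(f, x, baseline):
    """Exact Shapley values by averaging each feature's marginal
    contribution over every ordering in which features are revealed."""
    n = len(x)
    phi = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]      # add feature i to the coalition
            now = f(current)
            phi[i] += now - prev   # marginal contribution of feature i
            prev = now
    return [p / len(orderings) for p in phi]

x = [0.9, 1.0, 0.4]  # a hypothetical suspicious transaction
phi = shapley_values(fraud_score, x, BASELINE)

# Efficiency property: attributions sum to f(x) - f(baseline),
# which is what makes Shapley values useful as model explanations.
print(phi, sum(phi), fraud_score(x) - fraud_score(BASELINE))
```

For this linear toy model each attribution reduces to weight × (feature − baseline), so the output decomposes the fraud score exactly across the three features; for nonlinear models the same averaging over orderings still satisfies the efficiency property, at exponential cost, which is why practical SHAP variants use sampling or model-specific shortcuts.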
