Explainable AI (XAI) Using SHAP and LIME for Financial Fraud Detection and Credit Scoring
Sophia John Chavakula, Christopher Aseer J Albert, +2 authors, C. Mahamuni
Abstract
Financial fraud detection and credit scoring are two important applications in the financial domain that require both high accuracy and interpretability. While ensemble algorithms such as Random Forests and XGBoost deliver strong predictive performance, their predictions are difficult to explain and do not meet regulators' transparency requirements. This paper focuses on incorporating Explainable AI (XAI) methods, namely SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), to address concerns of trust, compliance, and accountability in these models. Decision Trees were employed for fraud detection and Random Forest for credit scoring; SHAP was used for global feature importance and LIME for instance-level explanations. The fraud detection model achieved 95% accuracy, and SHAP identified transaction characteristics such as amount and frequency as significant for detecting fraud. The credit scoring model achieved 76% accuracy, and LIME showed that applicants' debt ratios and payment history were important. Integrating SHAP and LIME improves model interpretability and fairness, helping stakeholders and regulators trust the AI solutions that financial companies will deploy in the future.
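To make the SHAP side of the abstract concrete, the sketch below computes exact Shapley values for one prediction of a Random Forest by enumerating all feature subsets, with "absent" features fixed at their dataset mean (an interventional baseline, which is one common approximation; the SHAP library uses faster tree-specific algorithms). The dataset, model, and all parameter choices here are illustrative stand-ins, not taken from the paper.

```python
import itertools
import math

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative synthetic data standing in for a credit-scoring table.
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

baseline = X.mean(axis=0)  # "absent" features are replaced by their mean

def value(x, subset):
    """Model output when only the features in `subset` take x's values."""
    z = baseline.copy()
    z[list(subset)] = x[list(subset)]
    return model.predict_proba(z.reshape(1, -1))[0, 1]

def shapley_values(x):
    """Exact Shapley values: average marginal contribution over all subsets."""
    n = x.shape[0]
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for S in itertools.combinations(others, r):
                # Shapley weight |S|! (n-|S|-1)! / n!
                w = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                     / math.factorial(n))
                phi[i] += w * (value(x, S + (i,)) - value(x, S))
    return phi

x = X[0]
phi = shapley_values(x)
```

By the efficiency property, `phi.sum()` equals the model's output for `x` minus its output at the baseline, which is what makes SHAP attributions add up to an explanation of a single prediction. Exact enumeration is exponential in the number of features, which is why practical tools rely on sampling or tree-structure shortcuts.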

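The LIME side of the abstract, instance-level explanation, can likewise be sketched without the lime library: perturb the instance, query the black-box model, and fit a distance-weighted linear surrogate whose coefficients act as the local explanation. Everything below (data, kernel width, surrogate choice) is an illustrative assumption, not the paper's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Illustrative synthetic data standing in for a credit-scoring table.
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def lime_explain(x, n_samples=2000, kernel_width=0.75, seed=0):
    """Local linear surrogate around x; coefficients are the explanation."""
    rng = np.random.default_rng(seed)
    scale = X.std(axis=0)
    # 1. Perturb the instance with noise proportional to each feature's spread.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
    # 2. Query the black-box model on the perturbed samples.
    preds = model.predict_proba(Z)[:, 1]
    # 3. Weight samples by an exponential kernel on scaled distance to x.
    d = np.linalg.norm((Z - x) / scale, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Fit a weighted linear surrogate; its coefficients are local importances.
    surrogate = Ridge(alpha=1.0).fit(Z - x, preds, sample_weight=w)
    return surrogate.coef_

coefs = lime_explain(X[0])
```

A positive coefficient means that, near this instance, increasing that feature pushes the model toward the positive class (e.g. fraud or default), which is the kind of per-applicant reasoning the abstract attributes to LIME for debt ratios and payment history.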