Explainable AI in Credit Card Fraud Detection: SHAP and LIME for Machine Learning Models
Chirumamilla Satya Keerthana, Siri Chandana Nalluri, Simrah Muskaan, Poorvie Sadagopan
Abstract
With the rapid growth of e-commerce and online banking, credit card fraud has become a significant challenge. Traditional fraud detection approaches have been outperformed by machine learning techniques. However, the reasoning behind classifying a transaction as fraudulent or legitimate is often opaque. To address this issue, we implement two explainable AI (XAI) methods, Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), across eight machine learning models: logistic regression, decision tree, random forest, support vector machine, extreme gradient boosting (XGBoost), Naive Bayes classifier, k-nearest neighbors, and a basic neural network. The results show how individual features of the data set contribute to a specific prediction. The detailed code for the project is provided here.
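To illustrate the principle behind SHAP mentioned above, the sketch below computes exact Shapley values for a tiny model by enumerating feature coalitions. This is not the authors' code or the `shap` library; the toy fraud-scoring function and its features (amount, hour, foreign flag) are hypothetical, chosen only to show how each feature's contribution to a single prediction can be attributed.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, instance, baseline):
    """Exact Shapley values for one prediction.

    For each feature i, average its marginal contribution to the model
    output over all coalitions of the remaining features. Features in a
    coalition take the instance's value; the rest take the baseline's.
    Feasible only for a handful of features (2^n coalitions).
    """
    n = len(instance)
    features = list(range(n))
    phi = [0.0] * n

    def value(subset):
        x = [instance[i] if i in subset else baseline[i] for i in features]
        return predict(x)

    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            # Shapley weight: |S|! (n - |S| - 1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for subset in combinations(others, size):
                s = set(subset)
                phi[i] += weight * (value(s | {i}) - value(s))
    return phi

# Hypothetical linear fraud score over (amount, hour, foreign flag) --
# an illustration only, not a model from the paper.
def toy_score(x):
    return 0.5 * x[0] + 0.2 * x[1] + 0.3 * x[2]

vals = shapley_values(toy_score, instance=[1.0, 1.0, 1.0],
                      baseline=[0.0, 0.0, 0.0])
```

For a linear model the Shapley value of each feature reduces to its term's contribution, so `vals` recovers the coefficients (0.5, 0.2, 0.3), and the values sum to the difference between the prediction and the baseline score; this additivity is the "additive explanations" property SHAP is named for.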
