Research Article

Review on Explainable AI by using LIME and SHAP Models for Healthcare Domain

by Abujar S. Shaikh, Rahul M. Samant, Kshitij S. Patil, Nilesh R. Patil, Aarya R. Mirkale
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 185 - Number 45
Year of Publication: 2023
10.5120/ijca2023923263

Abujar S. Shaikh, Rahul M. Samant, Kshitij S. Patil, Nilesh R. Patil, Aarya R. Mirkale. Review on Explainable AI by using LIME and SHAP Models for Healthcare Domain. International Journal of Computer Applications. 185, 45 (Nov 2023), 18-23. DOI=10.5120/ijca2023923263

@article{ 10.5120/ijca2023923263,
author = { Abujar S. Shaikh, Rahul M. Samant, Kshitij S. Patil, Nilesh R. Patil, Aarya R. Mirkale },
title = { Review on Explainable AI by using LIME and SHAP Models for Healthcare Domain },
journal = { International Journal of Computer Applications },
issue_date = { Nov 2023 },
volume = { 185 },
number = { 45 },
month = { Nov },
year = { 2023 },
issn = { 0975-8887 },
pages = { 18-23 },
numpages = {6},
url = { https://ijcaonline.org/archives/volume185/number45/32992-2023923263/ },
doi = { 10.5120/ijca2023923263 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Abujar S. Shaikh
%A Rahul M. Samant
%A Kshitij S. Patil
%A Nilesh R. Patil
%A Aarya R. Mirkale
%T Review on Explainable AI by using LIME and SHAP Models for Healthcare Domain
%J International Journal of Computer Applications
%@ 0975-8887
%V 185
%N 45
%P 18-23
%D 2023
%I Foundation of Computer Science (FCS), NY, USA
Abstract

In the dynamic realm of healthcare research and the burgeoning utilization of artificial intelligence (AI), multiple research studies converge to accentuate the immense potential and persistent hurdles facing AI systems. At its core, artificial intelligence seeks to emulate human intelligence, enabling the performance of tasks, pattern recognition, and outcome prediction through the assimilation of data from diverse sources. Its far-reaching applications encompass autonomous driving, e-commerce recommendations, fintech, natural language comprehension, and healthcare, with the latter domain undergoing significant transformation. Historically, healthcare leaned heavily on rule-based methodologies rooted in curated medical knowledge. However, the landscape has evolved considerably with the emergence of machine learning algorithms such as deep learning, capable of comprehending intricate interplays within medical data. These algorithms have demonstrated exceptional performance in healthcare applications. Yet a critical impediment lingers: the enigma of explainability. Despite their prowess, certain AI algorithms struggle to gain full acceptance in practical clinical environments due to their lack of interpretability. In response to this challenge, Explainable Artificial Intelligence (XAI) has risen as a pivotal solution. XAI functions as a conduit for elucidating the inner workings of AI algorithms, shedding light on their decision-making processes, behaviors, and actions. This newfound transparency fosters trust among healthcare professionals, enabling them to judiciously apply predictive models in real-world healthcare scenarios rather than passively adhering to algorithmic predictions. Nonetheless, the journey toward rendering XAI genuinely effective in clinical settings remains ongoing, a testament to the intricate nature of medical knowledge and the multifaceted challenges it presents.
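The local-surrogate idea behind LIME, one of the two techniques this review covers, can be sketched from scratch: perturb the instance being explained, query the black-box model on the perturbations, and fit a proximity-weighted linear model whose coefficients serve as the explanation. Everything below (the toy "black box" formula, parameter values, function names) is illustrative, not taken from the paper or from the actual `lime` library.

```python
import math
import random

def black_box(x1, x2):
    # Hypothetical opaque model standing in for any trained predictor.
    return 3 * x1 - 2 * x2 + x1 * x2

def lime_sketch(f, x0, n_samples=500, sigma=0.5, kernel_width=1.0, seed=0):
    """Fit a locally weighted linear surrogate of f around instance x0."""
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(n_samples):
        # Perturb the instance with Gaussian noise and query the black box.
        z = [xi + rng.gauss(0, sigma) for xi in x0]
        d2 = sum((zi - xi) ** 2 for zi, xi in zip(z, x0))
        X.append([1.0] + z)                           # intercept column + features
        y.append(f(*z))
        w.append(math.exp(-d2 / kernel_width ** 2))   # proximity kernel weight
    # Solve weighted least squares (X^T W X) c = X^T W y by Gaussian elimination.
    k = len(X[0])
    A = [[sum(wi * Xi[r] * Xi[c] for wi, Xi in zip(w, X)) for c in range(k)]
         for r in range(k)]
    b = [sum(wi * Xi[r] * yi for wi, Xi, yi in zip(w, X, y)) for r in range(k)]
    for i in range(k):                                # forward elimination
        for j in range(i + 1, k):
            m = A[j][i] / A[i][i]
            for c in range(k):
                A[j][c] -= m * A[i][c]
            b[j] -= m * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):                      # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef  # [intercept, weight_x1, weight_x2]

intercept, w1, w2 = lime_sketch(black_box, [1.0, 1.0])
```

The signs and magnitudes of `w1` and `w2` approximate the model's local gradient at the instance (here roughly +4 and -1), which is exactly the kind of per-feature attribution a clinician could inspect. The real `lime` package adds feature discretization, sampling from training statistics, and sparse regression on top of this core loop.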
In summation, this research paper underscores the importance of XAI in the domain of healthcare. It emphasizes the necessity for transparency and interpretability to fully harness the potential of AI systems while navigating the intricate landscape of medical practice, thus heralding a transformative era in healthcare research and delivery.
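The other technique the review covers, SHAP, is grounded in Shapley values from cooperative game theory: each feature's attribution is its average marginal contribution over all coalitions of the remaining features. For a handful of features this can be computed exactly by enumeration, which the following self-contained sketch demonstrates; the three-feature "risk score" model, its coefficients, and the feature names are purely illustrative and not from the paper or the `shap` library.

```python
from itertools import combinations
from math import factorial

def model(age, bmi, bp):
    # Hypothetical risk score with one interaction term (illustrative only).
    return 0.5 * age + 0.3 * bmi + 0.2 * bp + 0.1 * age * bp

def shapley_values(f, x, baseline):
    """Exact Shapley values; features absent from a coalition take baseline values."""
    n = len(x)

    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(*z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        contrib = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # Shapley kernel weight |S|! (n-|S|-1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                contrib += weight * (value(set(S) | {i}) - value(set(S)))
        phi.append(contrib)
    return phi

x, baseline = [2.0, 1.0, 3.0], [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
# Efficiency property: the attributions sum to f(x) - f(baseline),
# and the age*bp interaction is split evenly between the two features.
```

Enumeration costs 2^n model evaluations per feature, which is why practical SHAP implementations rely on sampling approximations (KernelSHAP) or model-specific shortcuts (TreeSHAP) for realistic clinical models.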

References
  1. Prashant Gohel, Priyanka Singh, and Manoranjan Mohanty, Explainable AI: Current Status and Future Directions (2021)
  2. Ploug T, Holm S, The Four Dimensions of Contestable AI Diagnostics - A Patient-Centric Approach to Explainable AI, Artificial Intelligence in Medicine (2020)
  3. Chaddad, A., Peng, J., Xu, J., Bouridane, A., Survey of Explainable AI Techniques in Healthcare. Sensors (2023)
  4. Christopher C. Yang, Explainable Artificial Intelligence for Predictive Modeling in Healthcare (2022)
  5. Devam Dave, Het Naik, Smiti Singhal, and Pankesh Patel, Explainable AI meets Healthcare: A Study on Heart Disease Dataset (2020)
  6. Senthilkumar Mohan, Chandrasegar Thirumalai, and Gautam Srivastava, Effective Heart Disease Prediction Using Hybrid Machine Learning Techniques (2019)
  7. Guoguang Rong, Arnaldo Mendez, Elie Bou Assi, Bo Zhao, Mohamad Sawan, Artificial Intelligence in Healthcare: Review and Prediction Case Studies (2020)
  8. Tim Hulsen, Explainable Artificial Intelligence (XAI) in Healthcare (2023)
  9. Samant, Rahul, and Srikantha Rao. "A Study on Comparative Performance of SVM Classifier Models with Kernel Functions in Prediction of Hypertension." International Journal of Computer Science and Information Technologies 4.6 (2013): 818-821.
  10. Pradnyesh Kadam, Shikha Yadav B, Abhishek R. Patel, Anusha Vollal, and Rahul M. Samant. MoodyPlayer: A Mood based Music Player. International Journal of Computer Applications 141(4):21-25, May 2016.
  11. Rahul Samant and Srikantha Rao. Evaluation of Artificial Neural Networks in Prediction of Essential Hypertension. International Journal of Computer Applications 81(12):34-38, November 2013.
  12. Rahul Samant, Srikantha Rao, Performance of Alternate Structures of Artificial Neural Networks in Prediction of Essential Hypertension, International Journal of Advanced Technology & Engineering Research (IJATER), Volume 3, Issue 6, Nov. 2013, ISSN: 2250-3536, pp. 22-27.
  13. Christoph Molnar, "Interpretable Machine Learning: A Guide for Making Black Box Models Explainable" (2019)
  14. Marco Tulio Ribeiro, et al., "Why Should I Trust You?: Explaining the Predictions of Any Classifier" (2016)
  15. Rudzicz, A. (2018). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1808.00064.
  16. Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
  17. Zhang, X., Wu, L., & Li, S. (2020). A survey of explainable artificial intelligence (XAI) from a big data perspective. European Journal of Operational Research.
  18. Das, A., & Zhang, L. (2018). Explainable AI for healthcare. arXiv preprint arXiv:1812.10464.
  19. Simonyan, K., Vedaldi, A., & Zisserman, A. (2013). Deep inside convolutional networks: Visualizing image classification models and saliency maps. arXiv preprint arXiv:1312.6034.
  20. Andreassen, T. (2021). Explainable AI in Industry 4.0: From Black-Box Models to White-Box Insights and Transparency. In Proceedings of the International Joint Conference on Neural Networks (IJCNN).
  21. R. M. Samant, et al., The Effect of Noise in Automatic Text Classification, Proceedings of the International Conference and Workshops on Emerging Trends in Technologies, pp. 557-558, Feb 2011. https://doi.org/10.1145/1980022.1980142
Index Terms

Computer Science
Information Sciences

Keywords

Healthcare Domain, LIME, SHAP