Explainable AI (LIME)

Ribeiro et al. (2016)

Prepared for Academic Purposes


Abstract & Background

As machine learning models became more complex, the need for interpretability grew. In 2016, Ribeiro, Singh, and Guestrin introduced LIME (Local Interpretable Model-agnostic Explanations), a method for explaining the predictions of any black-box classifier. It was one of the earliest and most influential works in explainable AI.

Key Contributions

LIME explains an individual prediction by approximating the model locally with a simpler, interpretable surrogate such as a sparse linear model or a shallow decision tree. It perturbs the input, queries the black box on the perturbed samples, weights each sample by its proximity to the original instance, and fits the surrogate to this weighted neighborhood; the surrogate's coefficients then highlight which features most strongly influenced that specific prediction. Because it needs only query access to the model, the approach is model-agnostic and widely applicable across domains.
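
To make the recipe concrete, here is a minimal sketch of the procedure for tabular data, not the reference implementation. It assumes numeric features on comparable (e.g. standardized) scales and a black-box predict_fn that maps a batch of inputs to positive-class probabilities; the function name lime_explain and its parameters are illustrative choices, not part of the original paper or library.

    import numpy as np
    from sklearn.linear_model import Ridge

    def lime_explain(x, predict_fn, n_samples=5000, kernel_width=0.75):
        # 1. Sample the neighborhood of x with Gaussian perturbations
        #    (assumes features are on comparable, standardized scales).
        rng = np.random.default_rng(0)
        Z = x + rng.normal(size=(n_samples, x.shape[0]))
        # 2. Query the black-box model on every perturbed sample.
        y = predict_fn(Z)
        # 3. Weight samples by proximity to x with an exponential kernel,
        #    so the surrogate is faithful locally rather than globally.
        dist = np.linalg.norm(Z - x, axis=1)
        weights = np.exp(-(dist ** 2) / (kernel_width ** 2))
        # 4. Fit a weighted linear surrogate; its coefficients rank the
        #    features by their local influence on this prediction.
        surrogate = Ridge(alpha=1.0)
        surrogate.fit(Z, y, sample_weight=weights)
        return surrogate.coef_

The authors' lime package follows the same outline but adds an interpretable representation of the input and a feature-selection step; in practice one would typically use it (e.g. lime.lime_tabular.LimeTabularExplainer) rather than a hand-rolled loop like this one.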

Critical Analysis

The strength of LIME lies in its simplicity and generality; it made explainability accessible to practitioners across many fields. However, its linear approximations can be misleading, especially near highly nonlinear decision boundaries, and its explanations are purely local: they say nothing about the global behavior of the model. Despite these limitations, it paved the way for later frameworks such as SHAP [2].
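
To make the nonlinearity concern concrete, here is a toy case reusing the hypothetical lime_explain sketch from above: an XOR-like model in which both features matter everywhere, yet the local linear surrogate at the origin reports near-zero importance for both.

    import numpy as np

    # XOR-like black box: class 1 exactly when the two features share a sign.
    def xor_model(Z):
        return (Z[:, 0] * Z[:, 1] > 0).astype(float)

    x = np.zeros(2)                    # explain the prediction at the origin
    print(lime_explain(x, xor_model))  # both coefficients come out near zero,
                                       # hiding the features' real influence

By symmetry, no linear function fits this neighborhood, so the surrogate's coefficients vanish even though the model depends strongly on both features.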

Personal Reflection

I see LIME as an essential milestone in the ethical use of AI. In a world where machine learning increasingly affects decisions in healthcare, law, and finance, understanding "why" a model made a prediction is crucial. Although imperfect, LIME represents a step toward transparency. Personally, I believe future research will need to balance accuracy, fairness, and interpretability more effectively.

References

[1] M. T. Ribeiro, S. Singh, and C. Guestrin, "Why Should I Trust You? Explaining the Predictions of Any Classifier," Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.
[2] S. Lundberg and S.-I. Lee, "A Unified Approach to Interpreting Model Predictions," Advances in Neural Information Processing Systems (NeurIPS), 2017.
[3] F. Doshi-Velez and B. Kim, "Towards a Rigorous Science of Interpretable Machine Learning," arXiv preprint arXiv:1702.08608, 2017.
