
Explainable AI and Interpretable Machine Learning

Abstract:
As AI and machine learning models become more complex and widespread, their
decision-making processes often remain opaque, leading to challenges in trust,
accountability, and compliance. Explainable AI (XAI) and Interpretable Machine
Learning (IML) aim to bridge this gap by making AI models more transparent,
understandable, and reliable.

This talk will explore the principles of XAI and IML, discussing why interpretability
matters in critical applications such as healthcare, finance, and security. We will cover
different approaches to model interpretability, including intrinsic (self-explanatory)
models like decision trees and linear regression, as well as post-hoc explanation
techniques like SHAP, LIME, and counterfactual explanations.
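To make the idea of a counterfactual explanation concrete, here is a minimal, self-contained sketch in Python. The "model" and all numbers are hypothetical and exist only for illustration; real counterfactual methods (and libraries such as SHAP or LIME) use far more sophisticated search and attribution strategies.

```python
def black_box(income, debt):
    """Hypothetical opaque loan model: approve when income - 2*debt >= 50."""
    return "approved" if income - 2 * debt >= 50 else "denied"


def counterfactual(income, debt, step=1):
    """Find the smallest single-feature change that flips a 'denied' decision.

    A counterfactual explanation answers: "What is the minimal change to the
    input that would have produced a different outcome?"
    """
    if black_box(income, debt) == "approved":
        return None  # decision already favorable; nothing to explain
    for delta in range(1, 1000):
        # Try raising income by delta...
        if black_box(income + delta * step, debt) == "approved":
            return f"increase income by {delta * step}"
        # ...or lowering debt by delta (clamped at zero).
        if black_box(income, max(debt - delta * step, 0)) == "approved":
            return f"decrease debt by {delta * step}"
    return "no counterfactual found"


# For an applicant with income=40 and debt=10 (denied, since 40 - 20 < 50),
# the search reports the smallest income increase that flips the decision.
print(counterfactual(income=40, debt=10))  # → "increase income by 30"
```

The appeal of this style of explanation is that it is actionable: rather than a bundle of feature weights, the user receives a concrete change ("increase income by 30") that would alter the outcome.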

Attendees will gain insights into the trade-offs between accuracy and interpretability,
ethical considerations in AI transparency, and practical tools for improving model
explainability. Whether you're a data scientist, engineer, or AI enthusiast, this session
will equip you with essential techniques to build more trustworthy and interpretable AI
systems.

Speaker:
Dr. Nagender Kumar Suryadevara received his Ph.D.
degree from the School of Engineering and Advanced
Technology, Massey University, New Zealand, in 2014.
He is a Professor at the School of Computer and
Information Sciences, University of Hyderabad, India. His research
interests include Internet of Things, Time-Series Data
Mining and Ambient Assisted Living Environment. He has
authored three books, edited four books and published
over 60 papers in various international journals,
conferences, and book chapters. He has delivered numerous presentations,
including keynote, tutorial, and special lectures. He is a Senior Member of the
IEEE, and he is passionate about developing AI-based products for
resource-constrained computing environments.
https://scholar.google.com/citations?user=S28OdGMAAAAJ&hl=en
