Explainable AI and Interpretable Machine Learning
Abstract:
As AI and machine learning models become more complex and widespread, their
decision-making processes often remain opaque, leading to challenges in trust,
accountability, and compliance. Explainable AI (XAI) and Interpretable Machine
Learning (IML) aim to bridge this gap by making AI models more transparent,
understandable, and reliable.
This talk will explore the principles of XAI and IML, discussing why interpretability
matters in critical applications such as healthcare, finance, and security. We will cover
different approaches to model interpretability, including intrinsic (self-explanatory)
models like decision trees and linear regression, as well as post-hoc explanation
techniques like SHAP, LIME, and counterfactual explanations.
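To make the post-hoc idea concrete, here is a minimal, illustrative sketch (not material from the talk) of permutation importance, a simpler cousin of SHAP-style attributions: it treats the model as a black box and measures how much predictions change when one feature's values are shuffled. The toy model, data, and all names here are assumptions for illustration only.

```python
import random

# Toy "black-box" model: a hand-written scorer whose internals we pretend
# are opaque. Feature 0 dominates the output; feature 2 is irrelevant.
def black_box(x):
    return 3.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

# A small synthetic dataset of feature vectors in [0, 1).
random.seed(0)
data = [[random.random() for _ in range(3)] for _ in range(200)]

def permutation_importance(model, rows, feature):
    """Post-hoc importance: mean absolute change in the model's output
    when one feature's values are shuffled across the dataset."""
    baseline = [model(r) for r in rows]
    shuffled = [r[feature] for r in rows]
    random.shuffle(shuffled)
    perturbed = [r[:feature] + [v] + r[feature + 1:]
                 for r, v in zip(rows, shuffled)]
    return sum(abs(b - model(p))
               for b, p in zip(baseline, perturbed)) / len(rows)

importances = [permutation_importance(black_box, data, f) for f in range(3)]
# Feature 0 should come out most important, feature 2 not important at all.
```

Unlike intrinsic approaches (reading coefficients off a linear model, or tracing a decision tree), this explanation needs only query access to the model, which is exactly what makes post-hoc techniques applicable to otherwise opaque systems.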
Attendees will gain insights into the trade-offs between accuracy and interpretability,
ethical considerations in AI transparency, and practical tools for improving model
explainability. Whether you're a data scientist, engineer, or AI enthusiast, this session
will equip you with essential techniques to build more trustworthy and interpretable AI
systems.
Speaker:
Dr. Nagender Kumar Suryadevara received his Ph.D.
degree from the School of Engineering and Advanced
Technology, Massey University, New Zealand, in 2014.
He is a Professor at the School of Computer and Information
Sciences, University of Hyderabad, India. His research
interests include the Internet of Things, time-series data
mining, and ambient assisted living environments. He has
authored three books, edited four books and published
over 60 papers in various international journals,
conferences, and book chapters. He has delivered numerous presentations, including
keynotes, tutorials, and special lectures. He is a Senior Member of the IEEE, and he is
passionate about developing great AI-based products in resource-constrained
computing environments.
https://scholar.google.com/citations?user=S28OdGMAAAAJ&hl=en