The imperative of interpretable machines

2020, Nature Machine Intelligence

AI-generated Abstract

As artificial intelligence (AI) becomes increasingly consequential in society, a framework connecting algorithm interpretability with public trust is essential. This paper discusses how recent regulatory trends have increased demands for transparency in AI systems, emphasizing the need for accessible explanations tailored to different stakeholders. In exploring the nature of interpretability, the paper raises critical questions: what explanations are necessary, to whom should they be directed, and how can their effectiveness be assessed? Its ultimate aim is to enhance trust and accountability in algorithm-assisted decision-making.