Greater interpretability is crucial to wider adoption of applied AI, yet today's most popular approaches to building AI models don't allow for it. Explainability in intelligent systems has run the gamut from traditional expert systems, which are fully explainable but inflexible and hard to use, to deep neural networks, which are effective but virtually impossible to see inside.
In this talk, I examine two approaches to building explainability into AI models, learning deep explanations and model induction, and discuss how effective each is at explaining classification tasks.
I also look at how a third category, learning more interpretable models with recomposability, uses building blocks to bring explainability to control tasks. This last approach, along with Machine Teaching, is a cornerstone of the Bonsai Platform. At the end of the talk, I demonstrate exactly how this works within the Platform by building an AI model that beats the game Lunar Lander.
I invite you to watch the talk in its entirety below or view the slides here. To learn more about how Bonsai is building Explainable AI with Machine Teaching, visit our How It Works page or check out the Bonsai Early Access Program.