Explaining AI explainability
Podcast: Practical AI
Published On: Mon Jun 08 2020
Description: The CEO of Darwin AI, Sheldon Fernandez, joins Daniel to discuss generative synthesis and its connection to explainability. You might have heard of AutoML and meta-learning. Well, generative synthesis tackles similar problems from a different angle and results in compact, explainable networks. This episode is fascinating and very timely.

Featuring:
Sheldon Fernandez – LinkedIn, X
Daniel Whitenack – Website, GitHub, X

Show Notes:
Darwin AI
FermiNets – a paper that introduces the idea of generative synthesis.
COVID-Net – Darwin AI's COVID-related computer vision model.
NLP Highlights podcast episode on behavioral testing.
Sheldon's explainability primer on Medium.