Podcast: AI + a16z
Published On: Fri Apr 19 2024

Description: a16z General Partner Anjney Midha joins the podcast to discuss what's happening with hardware for artificial intelligence. Nvidia might have cornered the market on training workloads for now, but he believes there's a big opportunity at the inference layer, especially for wearable or similar devices that can become a natural part of our everyday interactions. Here's one small passage that speaks to his larger thesis on where we're heading:

"I think why we're seeing so many developers flock to Ollama is because there is a lot of demand from consumers to interact with language models in private ways. And that means that they're going to have to figure out how to get the models to run locally, without the user's context and data ever leaving the user's device. And that's going to result, I think, in a renaissance of new kinds of chips that are capable of handling massive workloads of inference on device. We are yet to see those unlocked, but the good news is that open source models are phenomenal at unlocking efficiency. The open source language model ecosystem is just so ravenous."

More from Anjney:
- The Quest for AGI: Q*, Self-Play, and Synthetic Data
- Making the Most of Open Source AI
- Safety in Numbers: Keeping AI Open
- Investing in Luma AI

Follow everyone on X:
- Anjney Midha
- Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.