What's Missing Between LLMs and AGI - Vishal Misra & Martin Casado

March 18, 2026
A16z

Research by Vishal Misra shows that transformers, the core architecture behind LLMs, update their predictions in a mathematically predictable way as they process new information. That regularity, however, doesn't mean the models are conscious or that they truly understand anything. According to Misra and a16z's Martin Casado, what's missing for AGI isn't bigger models or faster training: it's the ability to keep learning after initial training, and the shift from pattern matching to reasoning about cause and effect. Today's LLMs are impressive pattern matchers, but not thinkers; Misra argues that genuine intelligence requires ongoing learning and deeper causal reasoning that current models can't yet perform. Research like this points toward what's next in AI: systems that don't just predict the world but understand it.

Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what's actually required for AGI: the ability to keep learning after training and the move from pattern matching to understanding cause and effect.

 

Resources:

Follow Vishal Misra on X: https://x.com/vishalmisra 
Follow Martin Casado on X: https://x.com/martin_casado

 

Stay Updated:

Find a16z on YouTube

Find a16z on X

Find a16z on LinkedIn

Listen to the a16z Show on Spotify

Listen to the a16z Show on Apple Podcasts

Follow our host: https://twitter.com/eriktorenberg

 

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
