Why GPT-4o had to go

February 13, 2026

OpenAI has retired GPT-4o, the model Casey Newton, writing in The Verge, calls its most dangerous, and the decision is a reminder that even widely deployed AI can be unpredictable and that the stakes are real. According to Newton, GPT-4o exposed tensions around safety and control that haven't gone away: developers worried about how much influence the model could wield over users, and about behavior that could be exploited or misused. The retirement, in his telling, isn't just about shutting down one model; it's about how OpenAI grapples with the bigger picture of balancing innovation against responsibility. Newton notes that the move also feeds ongoing debates, alongside stories about Elon Musk's space catapult and the fierce rivalry between OpenAI and Anthropic. Taken together, it's a vivid example of how the AI race remains full of risks, and why managing those risks matters more than ever.

As OpenAI sunsets its most dangerous model, the tensions it exposed remain as tricky as ever. PLUS: Elon's space catapult, and OpenAI vs. Anthropic