Secondhand LLM Psychosis

February 13, 2026

Here's something that might sound odd at first: people are starting to believe AI outputs that aren't real, a phenomenon Byrne Hobart calls "secondhand LLM psychosis." Because large language models (LLMs) generate convincing text, readers often treat them as reliable sources of truth. As Hobart points out, the problem isn't just that AI is sometimes wrong; it's that humans overtrust these machines, which leads to false confidence and even market manipulation. Meanwhile, companies aren't just using AI for content: they're also putting it to work on lobbying, special purpose vehicles (SPVs), and even funding competitors to stay ahead. Hobart notes that some organizations are finding creative ways to monetize AI by backing different ventures and bending compliance rules. So what does this mean for you? In a world flooded with AI-generated information, it's more important than ever to stay skeptical and ask what's real, and what's merely convincing enough to fool us.

Plus! Lobbying; SPVs; Compliance; Backing Competitors; Monetizing AI