
Longreads
- Patrick McKenzie has a very detailed look at the recent legal case against the SPLC. It's a great piece in the genre of explaining how things really work, in a meta way: the SPLC's activism got a lot of leverage because of how the US financial system is regulated, in that financial companies want to have a blacklist of organizations with whom they shouldn't do business, and really want to ensure that when those organizations appeal their designation, the appeal goes to some neutral third party rather than to the bank itself. Access to the banking system is valuable, and that means finding high-leverage points for constraining someone's access to it is a form of power. But they lived and died by the sword of financial regulation: once they lied to banks in the course of opening accounts for fictitious entities whose actual purpose was to execute clandestine transfers—i.e. when they started laundering money—they were an easy target for the exact enforcement systems they'd previously leveraged. The piece livens up some fairly arcane financial regulation topics with a few good jokes (I particularly appreciate: "WordPress is a complex and highly modular open source platform which you could use for a blog or e-discovery delivery service," which is a fancy way of saying that some nonprofits literally blogged about how they were explicitly violating the rules for what constitutes a tax-exempt nonprofit.)
- Wired has a great story about the impact of AI on life outcomes. The premise of the first 80%: a man who overcame serious health problems to attend medical school is unjustly rejected from numerous residency programs. He discovers that these residencies use AI, tracks down a patent covering their algorithm, reverse-engineers it himself, and then feeds it simulated data in order to conclusively demonstrate the exact bias these models had against him. The premise of the last 20%: basically all of that was wrong. The schools that got back to Wired mostly didn't use the AI tool in question at all, and the ones that did either tried it briefly or only used it for a handful of programs. And the CEO of the company says that they don't even use the patent, and also that "during the 2025–2026 cycle, Cortex did not algorithmically score or rank applicants." That's a pretty big plot twist! It makes the illustration Wired used for the article—a lurid pixelated gif of a confused guy menaced by clawed robotic tentacles—very confusing, unless it's meant to illustrate that the star of the story had a brief brush with LLM psychosis ("You're absolutely right! Reverse engineering the algorithm won't just give you answers—it'll give you closure."). But even that doesn't hang together too well: he ends up cold-emailing the heads of various programs he was interested in, and they like his cover letter and are delighted to interview him. So this story is worth reading closely, because many more people will see the headline and subhead than get to the part where the story completely reverses. The demand for stories of AI-generated injustice far outstrips the supply right now, and this story also demonstrates that hallucinations are neither unique to the artificial sorts of intelligence nor disqualifying. One could imagine the Wired of the 90s writing a story about someone who felt wronged by some big company, wrote a lot of code to prove a point, and then found out that he'd been mistaken—but they'd have written it as a slightly self-conscious celebration of being absolutely unwilling to let go of a technical problem until it was solved.
- On the topic of things that old Wired would have been all over: Ashlee Vance of Core Memory profiles Casey Handmer and Terraform (disclosure: long Terraform). There are many people who worry about overreliance on hydrocarbons, and a much smaller cohort who concede that there are some very useful molecules in that category and that we can find less harmful ways to get them. It's also a good case study in tech companies being downstream from science fiction, or at least from more speculative ideas. The original reasoning behind Terraform: if a one-way trip to Mars is more practical than sending a rocket to Mars with enough fuel to get back to Earth, we'll need a way to synthesize fuel there, and we might as well try that on Earth first, which is a friendlier environment with more access to talent and supply chains, and with an established market for the end product (the presumable chemistry is sketched after this list). Some companies are basically a work of hard science fiction performance art: taking physical constraints seriously but pushing a few reasonable assumptions to unreasonable extremes.
- Jerusalem Demsas on AI as a tool that leads to more concentration in media's cultural impact, even if there are still plenty of media outlets. Models are trained on many of the same texts, and those texts tend to produce token predictions that follow a particular political slant: basically center-left, a little more progressive on social issues, and fairly Abundance-leaning on economic issues. So if you're looking up some information using an LLM, that's about what you're going to get: a view of the world weighted by the preferences of whoever has collectively produced the most high-quality tokens. It's possible that things will drift back to the status quo as LLMs get better at picking up on feedback or as users get more diligent about custom instructions: models are able to represent plenty of belief systems faithfully if asked, and people tend to ask for that option.
- On LessWrong, Ashe Vazquez Nuñez looks at the impact of AI on Go, and finds it net negative. In many domains, it doesn't make sense to treat efficiency gains as a form of cheating: if you're trying to write a particular program, parse some text data, trawl through a large number of photos to find which ones contain a particular object, etc., there's a good chance that using AI will get you a more accurate result in less time. But in pure competitions, we have to decide which tools do and don't count as cheating; training for a marathon isn't cheating, but doing one on a motorcycle is. And when the cost of switching to the easier way, even for a moment, is low enough, it's incredibly hard for people not to take it. The post notes that people have an inaccurate view of how using AI to find the right move affects them: in their view, they're just using it for brainstorming. But while Go players' skills have improved, the improvement is confined to the early game, where they've memorized new openings, not the later game, where they'd have to use new strategies at a higher level of abstraction. So it's another case of AI as a leveler: it's easier to get better at Go, but it requires more willpower to get exceptionally good.
- In this week's Capital Gains, a look back at the rally of summer 2000 as an instructive period in market history. Consumer Internet was way down, but enterprise software and networking hardware both rallied; they were the picks and shovels bets that didn't require determining the winner in a competitive space. They got a quarter or two of pretty good results, and then promptly fell apart as their customers all went bankrupt.
- A Read.Haus user asks about industries where the minimum scale for profitability is so high that the industry can only support a small number of participants. There's a large enough cost to entering some industries, like building commercial jets, 2nm fabs, digital ads, and frontier labs, that only a handful of players can recoup it (rough arithmetic sketched below). They can reach an equilibrium where everyone politely refuses to take much market share, or they can end up collectively targeting more than 100% market share. It's hard to predict which way these consolidated-by-default industries go, but the structure where they're earning slightly supernormal profits is a bit more stable—until there's some change to the minimum scale that forces incumbents to learn how to compete again.
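On the minimum-scale question, here's the rough arithmetic as a minimal sketch with made-up numbers (the function and every figure below are illustrative assumptions, not taken from the thread): the number of firms a market can support is roughly total industry gross profit divided by the fixed cost of fielding a competitive product.

```python
# Illustrative sketch: how many firms can a high-fixed-cost industry support?
# All numbers are hypothetical, chosen only to show the shape of the math.

def max_viable_firms(market_revenue: float, gross_margin: float,
                     fixed_cost_per_firm: float) -> int:
    """Largest number of symmetric firms that can each cover their fixed costs."""
    total_gross_profit = market_revenue * gross_margin
    return int(total_gross_profit // fixed_cost_per_firm)

# Hypothetical leading-edge-fab economics: a $100B/year market at 50% gross
# margin, where staying at the frontier costs ~$20B/year per firm in
# depreciation and R&D. Two firms clear the bar; a third pushes everyone
# below breakeven, which is why politely declining to take share can be
# a stable equilibrium.
print(max_viable_firms(100e9, 0.50, 20e9))  # -> 2
```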
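And on the Terraform item above: the profile doesn't spell out the process, but the fuel synthesis is presumably some variant of the classic in-situ propellant route, which works equally well on the Martian atmosphere or on CO2 pulled from Earth's air: split water for hydrogen, then methanate the CO2. A sketch of that assumed chain:

```latex
% Assumed reaction chain (standard chemistry; not confirmed by the profile):
% solar-powered electrolysis supplies hydrogen...
\[ 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2} \]
% ...then the Sabatier reaction turns CO2 into methane (exothermic, over a catalyst):
\[ \mathrm{CO_2} + 4\,\mathrm{H_2} \rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O} \]
```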
You're on the free list for The Diff. This week, paying subscribers read about AI companies' joint ventures with private equity ($), the case for Chrome to rewrite pages on the fly to mitigate dark patterns ($), and the phenomenon of investor relations catering/pandering to retail investors ($). Upgrade today for full access.
Open Thread
- Drop in any links or comments of interest to Diff readers.
- The SpaceX S-1 is coming any day now (and seems to have had more leaks than any other confidentially-filed S-1). What’s worth reading to understand the business?
Diff Jobs
Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:
- Ex-Palantir founders building the meta-harness for all the lucrative agents need full-stack engineers who understand that turning AI into economically valuable solutions means a system that includes deterministic function calls and database queries. This is the company that will power CPU earnings for some time to come. (NYC)
- Series A, ex-Navy defense firm building AI-enabled drone defense systems is looking for an electrical engineer with range: schematic design, electrical simulation, printed circuit board (PCB) design, and firmware development in high-performance languages (C++, Rust, etc.). If you're interested in power and control systems for mission-critical technology, this is for you. (Austin, TX)
- A startup building a new financial market within a multi-trillion dollar asset class is looking for a data scientist with commercial financial experience. (If you've been an investor but are newer to the data side, that's great too.) (NYC)
- Lightspeed-backed team building the engineering services firm of the future is looking for founding members of technical staff excited about working alongside civil engineers to translate their domain expertise into the operating system that powers the next era of great American infrastructure. If you're an engineer with strong product intuition who's energized by access to users and excited by the prospect of transforming how we design and construct our built world with frontier AI, this is for you. (NYC, SF, or Remote)
- AI Transformation firm with an ambition to build an economic world model to run swathes of the private, unstructured economy is looking for FDEs, Platform Engineers, and business generalists who understand how to solve problems.
- Well-funded, frontier AI neolab working on video pretraining and computer action models as the path to general intelligence is looking for researchers who are excited about creating machines that learn from experience, not text. Ideally you have zero-to-one pre-training experience and/or are a high-slope generalist who’s frustrated that the big labs aren't doing this. (SF)
Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.
If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.
And: we're now actively deploying capital into early-stage companies through Anomaly. Our focus is on defense, logistics, robotics, and energy. If you'd like to chat, please reach out.