
Longreads
- Tyler Cowen interviews Arthur Brooks. Brooks has had many incarnations, but Cowen is primarily talking to him in his capacity as a happiness researcher. It's a pretty entertaining culture clash: at one point, Cowen asks why people don't just read a self-help book every year to take advantage of the placebo effect. And Brooks is game! He says that this would probably make you happier, because the self-help books are full of clichés, but these clichés will remind readers of obvious truths they'd heard before but forgotten. This is an amazing act of conservatism on multiple levels, both arguing for the continuity of a longstanding moral tradition and telling the consumer of precisely 50th-percentile taste that they happen to have made a fabulous choice.
- Hollis Robins asks: what was really going on with Milgram's electric shock experiments? In the original experiment, participants were told that they were administering electric shocks, of varying intensity, to other subjects. The shocks were fake, but the decision to shock was real. As it turns out, some of the subjects who were unusually obedient in administering shocks were also unusually disobedient in terms of following the experiment's rules. So the overall result might be noise, or might be picking up on sadism as a universal, or at least statistically unavoidable, trait.
- In Bloomberg, Spencer Soper, Leon Yin, Jaewon Kang, Cailley LaPara, and Christopher Cannon write about Amazon's growth in rural communities. This is a fun story about efficient frontiers. For a company like Amazon, rural customers are structurally more expensive to serve. On the other hand, other retailers have the same dilemma, and if Amazon can temporarily beat them on the relevant price/quality/convenience mix, they may end up with a loyal customer for a long time.
- Tanner Greer has thoughts on China's growing competitiveness in scientific research. It's stunning how quickly China has pulled ahead here. His model is that Leninist states need to work towards a goal or they collapse, and that China has pivoted its goal from economic growth to science. And it's working! As with targeting economic growth, it's very easy to misallocate resources, especially as you push past the frontier—the burden of knowledge means that eventually the only people qualified to underwrite a given research project are the ones who run it, and they may have the usual human incentives to keep plodding forward on something that they've realized won't work, in exactly the same way that a party official a decade ago might have decided that the only way to keep borrowers from defaulting was to inject even more credit into the economy. But that's the kind of problem that accumulates over time. Right now, they just haven't had enough time for that sort of institutional rot to set in. The US can absolutely afford to keep up or pull ahead, and America is still in the lead for the single most important technology being developed right now. But even there, it's worth the occasional glance in the rearview mirror.
- David Oks on the rise of spreadsheets and its consequences. The spreadsheet is an incredibly useful medium for displaying and toying with information, especially if it's in exactly three dimensions (for finance, the three are: when (column labels), what (row labels), and how much (cell contents)—it's a 3D system displayed in two dimensions, like a topographical map). But that's adjacent to a weakness, in that it's much too easy to manipulate some estimates and treat them as a realistic prediction about the future. A big fraction of white-collar work can be read as an effort to make reality conform with spreadsheets and vice-versa.
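The three dimensions described above can be sketched in plain Python (the labels and figures here are illustrative, not from the piece): row labels are "what", column labels are "when", and cell contents are "how much".

```python
# A toy financial model laid out the way a spreadsheet stores it:
# one row per line item, one column per period, one number per cell.
model = {
    "Revenue": {"2024": 100, "2025": 120},
    "Costs":   {"2024": -60, "2025": -70},
}

def cell(what: str, when: str) -> int:
    """Look up 'how much' at a given row ('what') and column ('when')."""
    return model[what][when]

# The third dimension is the value stored at each (row, column) address,
# much as a topographical map stores elevation at each (x, y) point.
print(cell("Revenue", "2025"))
```

The ease of typing a new number into any cell is exactly the weakness the piece describes: a tweaked estimate looks identical to a measured fact.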
- On Read.Haus this week, a financial koan worth quoting in full: "You are an investment manager who wants to beat benchmark, but with several key restrictions. 1. You are a large fraction of your benchmark, holding 5-15% of the market. 2. You are not smarter than the market on average and have no significant business insights. 3. You know 2 is true and are reasonably well calibrated. 4. You are not subject to the same restrictions that other indexers are, like Blackrock, Vanguard, State Street, etc. You can vote as you please, influence boards, and deviate from benchmark, even if you can't beat it in expectation. Can you, as an unconstrained Vanguard, create alpha purely from your relative position, even if you don't know anything more than anyone else?" This is a fantastic question. And the answer is: yes. If you are that big a factor in the market, your comparative advantage is in funding things that are good for economic growth overall, even if those investments don't capture that upside directly. In this case, the alpha is with respect to global benchmarks, not local ones, since this approach would subsidize other companies in the same country. So calling it alpha will be debatable. But the financial upside won't be.
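A back-of-the-envelope way to see why the answer is yes (a sketch with made-up numbers, not anything from the piece): if you hold a fraction of the entire market, an expenditure that raises the whole market's value pays you your pro-rata share of that gain, so it can be positive-NPV for you alone even when the specific investment never earns its cost back directly.

```python
def payoff_to_indexer(ownership: float, cost: float, market_gain: float) -> float:
    """Net payoff to a fund holding `ownership` of the whole market if it
    spends `cost` on something that raises total market value by `market_gain`.
    The fund captures its pro-rata share of the broad gain, minus the cost."""
    return ownership * market_gain - cost

# Illustrative numbers (in $B): a $1B expenditure that adds $20B of total
# market value is worthwhile for a 10% holder, but not for a 1% holder.
big_indexer = payoff_to_indexer(0.10, 1.0, 20.0)    # 0.10 * 20 - 1 =  1.0
small_holder = payoff_to_indexer(0.01, 1.0, 20.0)   # 0.01 * 20 - 1 = -0.8
print(big_indexer, small_holder)
```

The break-even condition is just ownership > cost / market_gain, which is why only a very large indexer can treat market-wide growth as its own return stream.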
- In this week's Capital Gains, some thoughts prompted by the Demis Hassabis biography (reviewed below): why don't the AI labs just build systematic trading models? Predicting whether the next order is a buy or a sell is a valuable skill, but, in the end, it isn't the most valuable skill.
You're on the free list for The Diff. This week, paying subscribers read about how government and private regulation selects for more sophisticated rulebreakers ($) and why in the highest-growth sectors, the economic owners of equity are the employees and the products that are legally equity are economically something else ($). Upgrade today for full access.
Books
The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence: One of my favorite video game concepts from my high school years was Black & White, a game in which the player takes the role of a deity who needs to attract followers through acts of mercy (e.g. miraculous rainfall on crops) or wrath (equally miraculous fireballs that destroy the crops). It was a fun concept, with a dynamic world that really felt like it was responding to what players did. It was less fun as a game, in part because the simulation was realistically complex enough that it was hard to affect things. One of the developers on that game turned out to be one Demis Hassabis, a chess and programming prodigy. So, we go way back. (As do the two million or so other people who bought the game.)
Hassabis later shifted his interests again. In fact, he is, at any given moment, doing exactly what the protagonist of a sci-fi book about someone who creates AGI would be doing. In the 90s, that meant building video games with elaborate simulations of interacting agents; in the early 2000s, it meant getting a PhD in neuroscience; and in the 2010s and 2020s, it means working on AI, first at an independent DeepMind, and then at Google. (Disclosure: long GOOGL. It's always nice when your due diligence is available on Kindle.)
He did some clever neuroscientific work early on. He spent some time on memory: you can imagine memory functioning like a camera, recording detailed scenes in high fidelity. Or you can imagine a more compressed form, something more like a director, who describes what each participant should do but doesn't feed them lines word by word. Hassabis' view was the latter, and he was able to perform some experiments on patients with brain damage to confirm it.
He eventually decided that AI was a more important goal to pursue. That's worked out quite well for him so far, but it's been a tricky path. Given that this is a book about a) a scientist, and b) someone who might end up being a major contributor to the technology that ultimately solves science, it's surprising how much of it is about investor relations. DeepMind had trouble getting its message across to VCs, and being acquired didn't solve that. And this was a trickier problem than with most companies. DeepMind employees, including Hassabis, seem to genuinely believe that the technology they're building could usher in either an era of unprecedented abundance or the end of the world. They eventually solved this, but the time period during which they hashed it out turned out to overlap with the period during which OpenAI realized that the transformer architecture meant that text-based models could start performing much, much better.
As that rivalry heats up, the book inevitably becomes more of a news digest. Fortunately, it's pretty exciting news. At least in the domain of white-collar tasks, he's contributed to automating a meaningful share of labor, and enhancing the value of the labor-hours that remain. It's just hard to spot the bullseye with a biography like this, and it's an admirable sacrifice (in expected value terms) to commit to writing about the head of an AI lab, given how rapidly the title of most-respected lab changes hands.
Open Thread
- Drop in any links or comments of interest to Diff readers.
- Outside of a few special cases like media and sports, the rule in business books is that CEOs get biographies and individual contributors get memoirs or nothing. Are there some other good business biographies of people who weren’t at the top of the org chart?
Diff Jobs
Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:
- High-growth startup building dev tools that help highly technical organizations autonomously test and debug complex codebases is looking for senior product managers who enjoy defining developer-facing APIs and abstractions. Experience with fuzzing or property-based testing a plus! (London, D.C.)
- A Fortune 500 cybersecurity company with decades of proprietary security data is running an internal incubation with a pre-seed startup mentality and a mandate to build something new in AI. They are looking for a founding engineer who can ship fast, an engineer with a security background who’d be excited to contribute to OpenClaw’s security efforts, an AI researcher, and a generalist (ex-banking/consulting/PE background preferred) who wants to wear a bunch of different hats. Comp is FAANG+ and cash heavy. If you want to build something new in AI, but also need runway, this is for you. (SF/Peninsula)
- Ex-Bridgewater, Worldcoin founders using LLMs to generate investment signals, systematize fundamental analysis, and power the superintelligence for investing are looking for machine learning and full-stack software engineers (Typescript/React + Python) who want to build highly-scalable infrastructure that enables previously impossible machine learning results. Experience with large scale data pipelines, applied machine learning, etc. preferred. If you’re a sharp generalist with strong technical skills, please reach out.
- Ex-Citadel/D.E. Shaw team building AI-native infrastructure that turns lots of insurance data—structured and unstructured—into decision-grade plumbing that helps casualty risk and insurance liabilities move is looking for forward deployed data scientists to help clients optimize/underwrite/price their portfolios. Experience in consulting, banking, PE, etc. with a technical academic background (CS, Applied Math, Statistics) a plus. Traditional data scientists with a commercial bent also encouraged. (NYC)
- A leading AI transformation & PE investment firm (think private equity meets Palantir) that’s been focused on investing in and transforming businesses with AI long before ChatGPT (100+ successful portfolio company AI transformations since 2019) is hiring experienced forward deployed AI engineers to design, implement, test, and maintain cutting edge AI products that solve complex problems in a variety of sector areas. If you have 3+ years of experience across the development lifecycle and enjoy working with clients to solve concrete problems please reach out. Experience managing engineering teams is a plus. (Remote)
Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.
If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.