Who Decides What AI Can Do?

March 3, 2026

Here's something that caught my attention — deciding what AI can and can’t do is turning into a legal and ethical quagmire. Byrne Hobart points out that, unlike traditional weapons, AI tools can be shaped and limited by their creators, which raises huge questions about control and responsibility. For example, Anthropic wanted guardrails around military and surveillance use, but when they raised concerns, the US government responded with a sort of digital death sentence — cutting off their contracts entirely. Hobart explains that this shift is a big leap, because it means companies providing AI for national security might not have a say in how their tech is used. And that’s just the beginning — these debates expose a fundamental clash of values between tech innovators and defense officials. As Hobart notes, in AI, unlike building planes in WWII, the decision to deploy a weapon isn’t just binary; the power to set limits lies squarely with the creators, which complicates everything from ethics to geopolitics.


In this issue:

  • Who Decides What AI Can Do?—Being a defense contractor is a pretty binary decision: you can choose whether or not to sell weapons or systems for targeting those weapons, but somebody else is deciding who targets them. AI makes this tricky, because limitations on how it can be used are, in a sense, part of the product itself. We've encountered problems like this before.
  • Fun Trades—Sometimes a derivative market ceases to be a bet on underlying reality and turns into a bet on the interpretation of the terms of a contract.
  • Insiders—Insider trading means misappropriating information, and the institutions whose data was misappropriated tend to frown on it.
  • M&A—The fewer big entertainment companies there are, the more they benefit from sabotaging one another.
  • Two-Tier Rounds—Strategic investments are a separate process from purely financial ones.
  • Switching Costs—One piece of career advice in the AI era is that the more intelligence is commoditized, the more it pays to be good at getting people to like you. Anthropic is demonstrating this.

Chat with this post on Read.Haus.

Who Decides What AI Can Do?

One of the great strengths of the common law tradition is that it reasons incrementally, and by analogy. If you're setting up a system for deciding whether a given bit of land belongs to Farmer Smith or Farmer Jones, you eventually settle on either a map or a textual description that, well, maps to one. And this works great until, many generations later, Jones' land turns out to have oil underneath, and Jones leased that land to a driller, which got Smith interested in the oil business, at which point Smith discovered that there had been a pool of oil that extended under both properties, and that Jones had gotten it first. Theft! Jones had taken something of value that—please refer to the map for details—was clearly part of Smith's property. So the judicial system has to decide who is right, and in 1889, the Supreme Court of Pennsylvania did so by declaring that:

Natural gas belongs to the owner of the land, and is a part of it, and, so long as it is on or in it, is subject to his control; but when it escapes, and goes into other land, or comes under another's control, the title of the former owner is gone. If an adjoining or distant owner drills a well on his own land, and taps his neighbor's vein of gas, so that it comes into his well and under his control, such gas belongs to the owner of the well.

Perhaps a few generations later, Smith and Jones' next round of descendants noticed that planes were flying overhead, right over their properties! And this time, they both got to be disappointed to learn that they don't have property rights going straight up from their home, forever, either.[1] New technology raised questions that could be asked in older terms, but also required some reasoning. When property is basically two-dimensional, you can afford to be sloppy about these things, but once there's something of value above and below the surface, you've added a couple of dimensions and complicated things.

If adding one dimension to a problem leads to tricky questions, it should be unsurprising that adding hundreds of billions of dimensions to something would also produce some tricky questions about property rights. So, over the last few days, there's been a very lively debate about what constraints AI labs can place on their customers, particularly when those customers include the US government and the relevant use cases involve making life-or-death decisions with AI.

So: Anthropic found out that their tools were used, via Palantir, in the capture of Maduro, and expressed some reservations about this to a Palantir representative, who then informed the US military about Anthropic's concerns. Anthropic wanted guardrails around the use of their tools for autonomous weapons systems and domestic mass surveillance, and when asked if they'd allow their tools to be used to stop a nuclear attack on the US, said they'd want the situation run past them first. So, Trump announced that the US government would stop doing business with Anthropic, effective six months from now, noting that "Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow." Which sounds like he's giving them an out: if they remain helpful, and find some mutually face-saving way to assure the US military that they won't randomly veto legitimate operations, they get to stick around.

That whole hypothetical arrangement blew up a few hours later when Secretary of War Pete Hegseth declared them a supply chain risk: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service." Since Anthropic's revenue is mostly enterprise, this is, effectively, a corporate death penalty: they can't do business with big contractors who have federal contracts, nor can they work with the big cloud companies. This reaction was wildly inappropriate, and will probably end up being less of a "death penalty" and more of a "death recorded." It's just going to be very hard for courts to buy the claim that there's a US company in the same national security risk category as Huawei, but also that the government's going to keep using them for the next six months.

The AI industry moves fast, because the underlying technology is always changing. That has made it an attractive place for executives who also move fast, so later that evening, Sam Altman announced that OpenAI had won the contract. (Apparently stipulating approximately the same contractual terms—no mass surveillance of Americans, human responsibility for autonomous weapons, though that last one is a little looser, as it implies that OpenAI's models could be used to directly control autonomous weapons, but that a human still has to sign off on this and take responsibility for what those weapons do.)

A useful precedent here is Henry Ford, a world-class hater who once delivered negative feedback on a prototype by destroying it with his bare hands. Ford was rather famously not fond of war or Jews. He'd continued to do business in Germany right until the company's operations there were seized in 1941, but also produced thousands of B-24 bombers at what was then the world's largest assembly line. Once there's an actual conflict, people tend to realize which side they're on pretty quickly.

But AI is different, in important ways. Building a weapon means contributing to a war effort (construction on Willow Run started before the US entered the war, and Ford might have been making the realistic bet that if the Nazis knew that they'd lost Henry Ford, and if the US already had an intimidating air force, it might have kept the US out of the war. Henry himself wasn't enthused about it at first; Edsel Ford and Charles Sorensen were the original advocates, though he later came around). It's binary: you might build planes hoping they'd be used exclusively against military targets, and worry that they'd be used on civilians instead, but it's hard for someone with 1940s technology to put that desire into effect. But the AI companies do have that capacity, or, at least, they have a lot more room to decide exactly what the limits of their models are. If the US decides that the plan that best serves our national interests is to synthesize a bioweapon, and they want an AI company to help, that company has to decide what it's going to do.

So treating Anthropic as a supply chain risk because they're setting strict contractual guardrails is both a radical expansion of the government's power and a Burkean policy that keeps defense contracting Boolean: if you're providing the US government with tools for waging war, you don't get to decide how they'll be used.

This is not a stable equilibrium for two reasons:

  1. There's a big gap in values between the people at AI labs and the people at the Department of War. A good way to make that more specific is to think about where each side would draw the line on "domestic mass surveillance" if it applied to left- or right-wing extremists. For one thing, both sides would disagree on who's an extremist, putting more people on the other side in that category. So, to each side, the other side wants "mass" surveillance while they'd prefer a more targeted approach. For various historically contingent reasons, if you have a large group of people who are very good at training large models, that group is going to be more politically progressive than average. The current administration does not share their views. So each side will be suspicious about how broadly the other side interprets things.
  2. We don't have a very good sense of what fine-grained conscientious objection would look like in practice. Militaries have always had to accept the fact that individual soldiers vary in how gung-ho they are about the whole thing. Some people end up in the military because they're genuinely eager to risk their lives for their country, some of them just weren't sure what to do after high school, others were mostly thinking about job security and tuition and didn't think they'd end up in combat. (A lot of military cultural norms that seem otherwise perverse start to make sense if you think of the goal as consistent output versus maximum possible output. It's just easier to make plans when you know lots of things will be 100% done, instead of wondering if a particular choice will be 50% or 500%.)

It's incredibly fortunate that we're having these debates in the context of rapid attacks that, at least so far, have been remarkably effective at accomplishing the US's policy goals.[2] That's low-stakes compared to Anthropic raising these concerns while the aircraft carriers are en route to Taiwan. The Chinese Communist Party will run into the same kinds of problems, but obviously has many more tools available to convince private sector actors to do things the government wants them to do.

What we'll probably end up doing is migrating some of the AI safety discussion into legislation instead. It would be helpful to have a category for general-purpose AI tools that are explicitly designed not to be used by the military, whose creators can build and distribute the model knowing that if it's used for defense purposes, the outcomes are entirely up to the US military—you wouldn't be able to stop the military from using an image recognition model you built to identify terrorist training camps, even if you'd be disappointed that those camps a) got attacked, and b) turned out not to have anything to do with terrorism. And you'd also be free to hobble the models to stop those kinds of uses (though figuring out how to do this is its own mess of technical challenges). For models that are meant for defense, the labs that offer them are necessarily taking some responsibility for how they're used. If GPT-6 is so crazy good that China permanently renounces its claims on Taiwan and disbands half of the PLA, Sam Altman will get to take credit as a great figure in American military history. And if what happens instead is that some OpenAI model mistakenly informs the US that Russia has deployed nuclear weapons, and the US overreacts, that's also something OpenAI would have to take responsibility for.

This model works pretty nicely because in practice, when wars break out the pacifists are generally pretty anxious to prove that they have a principled opposition to violence and not that they're pacifists because they like the other side. (Dalton Trumbo's anti-war novel Johnny Got His Gun was serialized in The Daily Worker; the book was out of print by the time the US entered the Second World War, and when people wrote to Trumbo asking for copies, he'd forward their letters to the FBI.[3]) If Team Anthropic knows that their models aren't going to be used to control drones and subject Americans to a secret ML-driven loyalty test, they'll probably be plenty motivated to get their products deployed in less culpable links in the supply chain. And the US probably should not try to compel a company to work with them against its own moral objections—if they took Anthropic more seriously, they might find some useful analogies in the difference between what models say in reasoning scratchpads and what they finally output to users. And this is probably part of the subtext of Anthropic's objections: the company was started because its founders were worried that OpenAI didn't care enough about safety, and it's being asked to behave like a malevolent AI that will bide its time and pretend to comply.


  1. This also saves us a future legal contention. If you extend property rights downward or upward, you're really saying that what people own is whatever prism starts at the exact center of the earth and extends upward such that it intersects with the plot of land they own on the surface and extends outward indefinitely. And that means that asteroid miners would be mining on the property of whoever's home faced that asteroid at any given moment. ↩︎

  2. To be clear: for any American intervention in the Middle East, it's a very bad idea to declare victory early on. We've probably learned some things since Iraq. On the other hand, it was widely assumed as recently as a few months ago that one of the things the current administration had learned from Iraq is that you can't just show up in a Middle Eastern country, depose the guy in charge, and think that's the end of it. ↩︎

  3. One reason it's hard to trust people who have extreme views is that another way to frame these views is that they're built from a smaller number of axioms. This makes them a lot more coherent than mainstream ideologies, which are coalitions that shift in surprising ways. But it also means that if you tweak a few variables, you can get an extreme swing in views. Trumbo was not the only person to completely flip his opinions on a big issue; for a while in the early 2000s, the most influential part of the US conservative movement was the neocons, many of whom had met when they were communists. ↩︎

Diff Jobs

Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:

  • Ex-Bridgewater, Worldcoin founders using LLMs to generate investment signals, systematize fundamental analysis, and power the superintelligence for investing are looking for machine learning and full-stack software engineers (Typescript/React + Python) who want to build highly-scalable infrastructure that enables previously impossible machine learning results. Experience with large scale data pipelines, applied machine learning, etc. preferred. If you’re a sharp generalist with strong technical skills, please reach out. (SF, NYC)
  • A pre-IPO, next-generation chemicals company that’s manufacturing the mission-critical inputs for a sustainable American reindustrialization is looking for a CFO to own the capital raising roadmap and allocation strategy end to end. Experience turning corporate strategy into a data-driven narrative and advising on late stage capital raises and/or IPOs preferred. (Remote, Houston)
  • Ex-Citadel/D.E. Shaw team building AI-native infrastructure that turns lots of insurance data—structured and unstructured—into decision-grade plumbing that helps casualty risk and insurance liabilities move is looking for forward deployed data scientists to help clients optimize/underwrite/price their portfolios. Experience in consulting, banking, PE, etc. with a technical academic background (CS, Applied Math, Statistics) a plus. Traditional data scientists with a commercial bent also encouraged. (NYC)
  • A leading AI transformation & PE investment firm (think private equity meets Palantir) that’s been focused on investing in and transforming businesses with AI long before ChatGPT (100+ successful portfolio company AI transformations since 2019) is hiring experienced forward deployed AI engineers to design, implement, test, and maintain cutting edge AI products that solve complex problems in a variety of sector areas. If you have 3+ years of experience across the development lifecycle and enjoy working with clients to solve concrete problems please reach out. Experience managing engineering teams is a plus. (Remote)
  • Series A startup that powers 2 of the 3 frontier labs’ coding agents with the highest quality SFT and RLVR data pipelines is looking for growth/ops folks to help customers improve the underlying intelligence and usefulness of their models by scaling data quality and quantity. If you read arXiv, but also love playing strategy games, this one is for you. (SF)

Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.

If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.

Elsewhere

Fun trades

During the US/Israel attack on Iran, Kalshi's official account highlighted the availability of a market on whether or not Ali Khamenei would be "out as Supreme Leader" of Iran. They later walked this back, arguing that they don't make prediction markets based on death and would resolve this one based on the prevailing market price at the time that he died. Which makes this one of those very strange, self-referential markets that prediction market platforms sometimes create. (The most entertaining of these is probably Manifold's beloved whales versus minnows market, a prediction market on whether the total size of the "YES" position, multiplied by 10,000, would be larger than the total number of individual traders who had "NO" bets.) In the case of a death market that resolves to the last market price, you're really asking two questions:

  1. Will traders be able to react to credible rumors of the death before the market gets resolved, and
  2. Conditional on that, which direction is it easier to push the price?

If the market is trading at 50%, and you're confident you could push it to 70% by the time it resolves, then your average price would be somewhere under 70% and you'd win. You'd prefer a case where there are lots of limit orders close to the current price (so you can get a big position there) and fewer further away (so every marginal bet you make swings the price more in your favor). You could just look at the order book, but then you have to worry that someone else is thinking the same way, and that they have a bunch of big-ish limit orders on one side of the book specifically so you'll make the foolish choice to push the price in the other direction. In unregulated markets, the force that keeps manipulation in check is that a market manipulator must be willing to provide a lot of liquidity at unfavorable prices.
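To make that arithmetic concrete, here's a minimal sketch in Python, with hypothetical numbers rather than any real Kalshi order book or API, of why a book that's deep near the current price and thin near the target is the manipulator's dream: most of the position is accumulated cheaply, and the last few thin levels do most of the work of moving the final print.

```python
# A toy model of pushing a last-price-resolved market (made-up numbers).
# The manipulator's gross edge is the gap between their volume-weighted
# average entry price and the price the market ultimately prints.

def push_cost(asks, target_price):
    """Sweep resting YES offers (price, contracts), cheapest first, until the
    best remaining offer is at or above target_price. Returns contracts
    bought, dollars spent, and the volume-weighted average entry price."""
    bought, spent = 0, 0.0
    for price, size in sorted(asks):
        if price >= target_price:
            break
        bought += size
        spent += price * size
    return bought, spent, (spent / bought if bought else None)

# Hypothetical book: deep near the current 50% price, thin near 70%.
asks = [(0.51, 4000), (0.53, 3000), (0.56, 2000), (0.62, 800), (0.68, 300)]

bought, spent, avg = push_cost(asks, target_price=0.70)
print(f"bought {bought} contracts at an average price of {avg:.3f}")
print(f"gross edge per contract if it resolves near 0.70: {0.70 - avg:.3f}")
# Ignores fees, and the risk that someone fades the move before resolution.
```

With this toy book the average entry comes out around 54 cents against a 70-cent print, which is exactly the asymmetry described above; flip the depth (thin near the current price, thick near the target) and the trade stops working.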

Right now, this market is paused, so the ultimate resolution is unclear. You can sympathize with Kalshi being reluctant to support death markets, which are unpopular. But they have to balance that against the concern that they're an unregulated playground for traders who are better at exploiting the resolution process than at having the real-world analytical edge that's supposed to make these markets worthwhile.

Insiders

On the topic of prediction markets, OpenAI has fired an employee for insider trading on them. If you look at the history of US insider trading enforcement, there's a visible side (laws get written, people get convicted) and an invisible side (companies don't want their employees insider trading, and they really don't want executives to realize that they can create volatility and trade ahead of it). In the first big US insider trading case, Texas Gulf Sulphur, one of the traders was an EVP at Morgan Guaranty who got a secondhand tip. He promptly traded on that tip—but on behalf of a hospital for which he was a trustee, after which he tipped off the head of Morgan Guaranty's pension, who bought the stock for the pension (and then a bit, and at a higher price, for himself). There seems to have been some kind of understanding that insider trading was allowed, but that it was a bit unsporting to just go out there and make the trade yourself, and that by handing the profits off to a charity you might buy yourself a little indulgence. A prediction market insider trader is stealing—they're using information from their employer in an unauthorized way. But they're also going to a fence who takes a ridiculous percentage—it's incredibly valuable to competing labs to know what launches are coming and when; some of these labs will accelerate their own big announcement to make the other lab look like an also-ran. It's entirely possible for a lab to be worth a billion dollars more than in the counterfactual scenario where they don't have a few days' advance notice of big news. The five- and six-figure profits traders make are a ridiculously small share of the value they transfer.

M&A

Netflix has dropped out of the bidding for Warner Bros. Discovery, which nets Netflix a $2.8bn breakup fee and lets its co-CEO give a pretty brutal interview about what will happen next:

This deal is dependent on a lot of cost-cutting. We were in the books of Warner Bros., and the biggest cost centers are people in productions. There’ll be cuts in excess of $16 billion. They are telling people who lend them the money that’s gonna happen in 18 months or so. It would be less production, less people working.

Presumably, Plan A for Netflix was that they'd acquire Warner Bros. Discovery and become a much more vertically-integrated business, and also one that had many different price points and revenue models. But it's nice that their Plan B involves telling everyone in Hollywood who depends on either Paramount or Warner Bros. Discovery to start looking for backup options—and, as a bonus, having a little extra cash to throw at them.

Two-Tier Rounds

OpenAI recently raised $110bn from Softbank, Amazon, and Nvidia (disclosure: long the last two), and plans to raise another $10bn from financial investors ($, The Information). Strategic investors tend to get different terms, because either the deal is strategic to the investor or to the recipient. If you're running a smallish AI company, it's valuable to have Nvidia on the cap table, and the valuation Nvidia pays will reflect that. For Amazon, it's helpful to have a big anchor AI company on the cap table, and the equity is part of that. But these strategic considerations all have to be negotiated in parallel—each investor's strategic benefit partly depends on who else is in the round. So it makes sense to do all of these concurrently, and then move on to the investors who are mostly providing pure cash. For those investors, the mix of strategics might moderately push the ideal valuation around, but it won't be the difference between doing a deal and not.

Switching Costs

Anthropic has staked out a position as the AI company that draws brighter ethical lines, and as an underdog in a fight with both the American military and OpenAI. Depending on where you stand, this is either a stirring example of corporate backbone or a case of arrogant technologists overriding democracy. But, either way, it means a lot of attention for Anthropic, and it's a great marketing hook. So they've shipped a new feature that makes it easy to import conversations and preferences from other tools. As models get generally better, there are more tasks for which they're all good-enough, and at that point companies need some other kind of moat. For Anthropic, one of those moats is that they're probably the lab with the highest ratio of people rooting for them to win to current revenue, so they're the biggest beneficiary of lower switching costs.
