The Middle Game: Routers at the Edge

February 24, 2026



In this issue:

  • The Middle Game: Routers at the Edge—If there's an AI application that requires specialized knowledge, and is already attached to a lucrative existing business model, it can achieve escape velocity by getting rapid deployment and monopolizing the best sources of training data. This is already happening in healthcare.
  • Cameras—What happens when everyone's on camera all the time?
  • Painting the Tape—Bidding low to buy low.
  • Edge Cases—When mistakes show that you're well-calibrated.
  • Leaning in to Leaning Out—Uber once wanted to win in AVs, but the best thing they can do now is to commoditize the business so there are fewer big winners but many more participants.
  • Two-Tier Rounds—It's arguably unfair that investors who add different levels of value all get in on the same terms. But rectifying this makes companies look like they're doing better than they are.
Audio: The Diff, February 23rd 2026


The Middle Game: Routers at the Edge

A few months ago we wrote Routers, Apps, AGI, about how chatbots routing queries to whatever tool could answer them—another model, a third-party service, the checkout page for a service provider, contact information for a consultant to hire—could lead to LLMs basically rolling up the real economy. The basic idea was a Hayekian vision where the big problem is generic information-transmission: whenever something changes anywhere in the world, it changes people’s optimal behavior in unpredictable ways, and you need some system to transmit this information to the intended recipients without burying them under trivialities. Prices are an elegant way to do this, but intelligence-on-demand can operate in more dimensions.

In other words, AGI isn’t Nobel Prize winners in a datacenter but a superhuman coordination technology, a high-fidelity simulacrum of the economy itself. In the fullness of time, if we were all wearing Google Glass, or, going further, some kind of brain/machine interface, it would stand to reason that Google, with exclusive access to those high-fidelity, real-time sensors, could coordinate a significant amount of economic activity by organizing all the world’s high-entropy information as soon as it’s created.

OpenAI’s roughly trillion-dollar valuation (don’t be afraid to round up—it works for Sam Altman!) is a bet that OpenAI can apply this routing process over a larger share of the economy, or with more precision, than any other company. That valuation is roughly equal to the market cap Google has added since early September, implying that the market thinks Google’s decades-long head start in acquiring users and building cheap infrastructure to organize the world’s information might put them in the lead—with a preexisting business model that’s already quite good at routing natural-language queries to the highest bidder.

Search engines and LLM chatbots can free-ride on an existing infrastructure of sensors that have collected data and organized it in text form. But for now, they’re locked into relatively simplistic economic models where they can’t capture the maximum value they create. More narrowly-focused entities can do so: they can collect the subset of data that their users value, exclude anything that dilutes that value, and plug into a supply chain that already has experience extracting value. It’s hard for a general-purpose product to catch up to all the special-purpose tools and relationships an existing incumbent has. In particular, general-purpose labs have a shortage of special-purpose sensors, where a sensor is anything that collects data that would help an AI tool improve its world model in a way a particular user would appreciate.

The special-purpose applications already see economically-valuable problems before the big labs, and will continue to do so for a while.[1] There’s too much informational and economic dark matter that the big labs don’t see, and now smaller players know they either need to build a durable standalone business model or pivot to being mostly in the business of collecting data.

The maximalist router thesis is a very chatbot-centric view of the world, and while the broadest use of LLMs has been using a general-purpose model to respond to natural-language queries (through a chatbot, or by adding inline AI-generated results to search results from traditional search engines), the fastest adoption outside of software is actually happening in healthcare, particularly with OpenEvidence.

OpenEvidence recently raised at a $12B valuation, up from $6B in October. The key driver is exponential ad (routing) revenue growth: they’re at a $150M annual run-rate and growing 30% month over month. And, like other ad platforms, they’re generating this revenue at ~90% gross margins.[2] This pattern-matches to plenty of cases where someone either found a value-added way to resell tokens or a way to move inference from COGS to opex so their unsustainable business model looked a little better. But OpenEvidence isn’t reselling tokens. They have built the only trusted sensor that can legibilize doctors’ uncertainty and intent at the moment of clinical decision-making. They sit at the edge of two pools of dark context that centralized routers cannot touch: doctors and patients on one side, and on the other the universe of medical journals, pharma companies, med device manufacturers, diagnostics companies, payors, clinical service providers, and everyone else who provides useful information and solutions to them. OpenEvidence’s traction shows something the market hasn’t fully appreciated: vertical-specific sensors at the edge, and the routers they plug into, can capture enormous value precisely because of these limitations on a centralized, fully general approach. Their traction is a proof point for a general thesis on edge/vertical routers. It gives us a great opportunity to unpack how our original concept of AGI as a clever matching engine is actually playing out in a specific corner of the economy, and makes clear why the EV of one or two centralized routers is unlikely to eat all of global GDP anytime soon.
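As a quick sanity check on what 30% month-over-month growth compounds to, here's a sketch assuming, hypothetically, that the rate held (no growth rate survives indefinitely, so this is illustrative, not a forecast):

```python
# Hypothetical compounding of a $150M run-rate at 30% month-over-month growth.
# The starting figures are from the text; the projection is purely illustrative.
run_rate = 150e6          # current annual revenue run-rate, USD
monthly_growth = 0.30     # 30% month over month

for months in (6, 12):
    projected = run_rate * (1 + monthly_growth) ** months
    print(f"{months} months: ${projected / 1e9:.1f}B run-rate")
# prints a $0.7B run-rate at 6 months and $3.5B at 12 months
```

The point is just that at this growth rate, the gap between the $6B and $12B valuations closes on a timescale of months, not years.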

Fundamentally this is because a router’s value goes beyond raw intelligence and is some multiplicative function of:

  1. The absolute number of people you are solving problems for[3]
  2. How economically valuable those people are
  3. The relative economic value of the problems you are solving for them (the value of the context you are originating/sensing/routing)
  4. What fraction of the information needed to solve the problem is surfaced by these users (and what fraction of the cost of a solution is information rather than energy, raw materials, or transportation)
  5. How completely you are able to solve these problems, i.e., your access to and influence/execution over the comprehensive solution set, and how naturally these actuators expand over time

This actuator control and expansion is generally some function of how successful you've been at generating/aggregating useful dark matter/context (1, 2, 3).

1-3 measure the quantity of previously illegible dark matter your sensors are originating and capturing as well as the quality/value of that context; 4 and 5 dictate how effectively you can route and monetize it.
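That multiplicative structure can be sketched in a few lines. The factor values below and the plain-product functional form are illustrative assumptions, not figures from the piece; the only claim being modeled is that the factors multiply rather than add:

```python
from dataclasses import dataclass

@dataclass
class RouterValue:
    """Toy multiplicative model of a router's value.

    The five fields mirror the five axes in the text; treating value
    as their plain product is an illustrative simplification.
    """
    users: float                   # (1) number of people with problems solved
    user_value: float              # (2) economic value of those users
    problem_value: float           # (3) relative value of the problems solved
    info_share: float              # (4) fraction of the solution that is information
    solution_completeness: float   # (5) access to / execution over solutions

    def value(self) -> float:
        return (self.users * self.user_value * self.problem_value
                * self.info_share * self.solution_completeness)

# Because the factors multiply, a vertical router with few but valuable,
# well-served users can out-earn a broad router with vastly more users.
vertical = RouterValue(600_000, 10.0, 10.0, 0.8, 0.7)
broad = RouterValue(100_000_000, 1.0, 1.0, 0.3, 0.2)
print(vertical.value() > broad.value())  # prints True
```

A multiplicative form also means a near-zero on any single axis (say, trust, which gates axis 4) collapses the whole product, which is the durability argument in miniature.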

OpenEvidence is extremely privileged across all five axes. They're solving problems for doctors (the highest-earning professionals in the US) and for lots of them: as of last month, more than 50% of US physicians, about 600,000, were using OpenEvidence, for an average of 14 minutes a day. The last time a technology was adopted by doctors that fast, it was Google. They're solving the most economically valuable problem a doctor faces: clinical decision-making, i.e., the actual diagnosis and treatment of patients under uncertainty and in real time. And they are the most complete solution to date: grounded in evidence from the most prestigious medical journals, scoring 100% on the USMLE, helping doctors match patients to potentially life-saving clinical trials, and suggesting the context-aware treatment paths, medications, and medical devices most likely to solve patients’ problems and help doctors do their jobs.

But the labs are also trying to build the universal API for healthcare. OpenAI, Anthropic, and Google have launched healthcare products. So what gives OpenEvidence durability?

In a few words, it’s compounding trust and what that enables. Doctors are perhaps the most credential-sensitive demographic in the world, in part because they spend their entire early adulthood earning an artificially scarce credential. A world where technology replaces credentials, legible signals of human expertise, and expert institutions makes a doctor feel very uncomfortable. If you want to know exactly how uncomfortable, next time you see your doctor, ask them about some piece of health advice you found on Google or ChatGPT and observe their facial expression and response. OpenEvidence understood this and ran a full-stack credibility strategy with a few interlocking parts. The first was explicit counterpositioning against the labs, who were training on the open internet (including health blogs, social media, etc.—any remedy that’s marketed with “Doctors hate this one weird trick” will be represented in a broad set of training data) and liable to produce the embarrassing (and potentially dangerous) medical outputs that resulted from doing so. OpenEvidence trained an ensemble of specialized models exclusively on 35 million peer-reviewed sources, bootstrapping initially on public-domain material from the FDA, CDC, PubMed, etc. Their models have no connection to the public internet during training or inference whatsoever. This meant hallucination risk in their early system was meaningfully lower than in pre-o1/reasoning-paradigm LLMs, and the product was free, so doctors began adopting it virally.[4]

A number of those early adopters happened to be senior members of the editorial boards at the most prestigious medical journals, which led to the next piece: OpenEvidence was able to lock down exclusive content partnerships with JAMA, NEJM, NCCN, the American Medical Association, all 11 JAMA specialty journals, the American Academy of Family Physicians, the American College of Emergency Physicians, etc. They used this exclusive, credentialed ground truth to train an ensemble of models that became the first to score 100% on the USMLE. This attracted more doctors, which attracted more credentialed ground truth, and the flywheel began spinning. Trust with practitioners became trust with the institutions that credential them, which became exclusive access to the ground truth that made the system better and deepened trust with practitioners further. The NEJM partnership in particular was pivotal and illustrates the supply side flywheel nicely. Daniel Nadler, OpenEvidence’s CEO, provides some interesting context there: “some of these really well-funded AI companies threw enormous amounts of money at them (NEJM) and they said no. If they’re a private company, they probably would have said yes, but they’re a nonprofit so they said no because the Massachusetts Medical Society, which is a nonprofit organization, cared more about the sanctity and the pristineness of their mission as a nonprofit than they did about just trying to score some sort of quick commercial contract.” In fact, the NEJM reached out to OpenEvidence, not the other way around: “In our case, we didn’t show up at their door. A number of the very senior people on the editorial board of the New England Journal of Medicine were power users of OpenEvidence, and they wanted their content to show up in the thing that they were using.”

Beyond exclusive supply and counterpositioning, OpenEvidence made their corporate identity synonymous with credentialed trust. They engineered their about/team page to speak to doctors’ biases: under each team member's name is not where they previously worked or their current title in the company, but only which prestigious educational credentials they earned and from which institutions.

By arbitraging credibility from the journals to create a superior system, mirroring doctors' own credential-sensitivity back to them, and offering the product for free, they aggregated 50% of the physicians in the US. And by aggregating doctors they find themselves in a position of extraordinary power in the healthcare market. Doctor-friendly solution providers, from the remaining scientific journals and universe of pharma/med device businesses, to diagnostics services providers, vertical systems of record like Veeva and the EHR platforms like Epic, payors, and clinical research organizations etc. are all lining up to plug in, stream their context, and make OpenEvidence’s platform closer to a medical super intelligence than it already is.

Simply, OpenEvidence has become the only healthcare router that’s fed by a trusted sensor and this has interesting implications. Because doctors trust the sensor, they reveal their clinical uncertainty naturally, in real time, as a byproduct of using it. This dark matter (clinical uncertainty based on high-entropy, idiosyncratic patient situations) is very much created, not discovered and it’s created by virtue of trust. A centralized router can't replicate this by offering superior general intelligence alone, because doctors won't generate the context for a platform they don't trust. In fact, you can model the set of things doctors ask/reveal to OpenEvidence as the exact set of things they would be very hesitant to ask ChatGPT about: the lack of trust creates sky high verification costs, and the asymmetric downside from not verifying untrusted outputs means that the valuable context is never originated.

This is worth dwelling on because it inverts a common assumption about context and AI. The default mental model is one of discovery: valuable information exists out there in the world, and the job of a sensor is to go find it, scrape it, and pipe it back to a router. But OpenEvidence’s service is closer to selling confirmation of an informed guess, plus documentation to back it up. The doctor’s thought trace, which boils a real-time data stream of patient symptomology, diagnostic results, medical history, and all of their prior knowledge and intuition into a clinical hypothesis, and in particular the doubts they have about that hypothesis (“clinical uncertainty”), previously did not exist in any other system: on-prem, in the cloud, on paper, anywhere. Perhaps it existed as airwaves when a doctor asked another doctor they trusted for advice about a specific patient scenario. No one could survey doctors at scale and in real time about their diagnostic uncertainty; the best doctors wouldn’t fill out the survey, and even if they did, the act of filling out a survey is not qualitatively the same as the context revealed by a constant stream of real, novel patient cases, under real pressure and uncertainty. The Mercors, Surges, and Scales of the world are trying to replicate this for the labs, but it is not of the same quality, nor does quantity of decent inputs make up for the quality of the best ones: companies that are hiring doctors to provide and rate answers for general-purpose AI tools are hiring the doctors who aren’t minting money by using special-purpose AI tools, and are presumably getting adversely selected. And it’s very hard to change this because of the time value of money. Mercor, Surge, Scale, etc. are paying you to train a model whose outputs will, as a result of that training, be valuable at some point in the future. A patient or insurance company is paying a doctor for outputs that are immensely valuable to them (at least in theory) today.

OpenEvidence’s context is a byproduct of genuine use, and genuine use is a byproduct of trust, and trust is a byproduct of the credentialed ground-truth supply that comes from existing at the edge. If you remove any link in that chain, the dark matter can’t be legibilized, because it wouldn’t even exist. This goes beyond Hayek's knowledge problem: Hayek showed that valuable knowledge is dispersed and local and can't be centrally collected; in the case of routers at the edge like OpenEvidence, the knowledge doesn't exist until the trusted sensor elicits it. It is constituted by the interaction between a trusted sensor and the economic actor using it, which means a centralized router can't access it, not because it's hard to find, but because there is nothing to find.

This makes routers at the edge very durable: they not only see context others can't, but also generate context others can't, because the context is constituted by the relationship between the sensor and the economic actors using it. And supply follows this scarce context, which makes the solution more and more useful, and creates a flywheel where the context generated is increasingly scarce: as the platform handles more of the routine cases (the ones already inside the training distribution), the remaining queries doctors bring to it are increasingly outside the existing data (genuine edge cases, novel clinical uncertainty, etc.). So the platform's training corpus self-selects for progressively higher-entropy, higher-value tokens over time. We could even see a two-track healthcare system, where ChatGPT-enhanced urgent care and medical tourism take care of prosaic issues and compete on price, while a basically separate healthcare system solves trickier problems, harvests proprietary data, and has immense pricing power.

We can generalize this:

  1. Exclusive credentialed ground truth (JAMA, NEJM partnerships) makes the sensor trusted

  2. Trust legibilizes latent dark matter (50%+ of US doctors reveal their clinical uncertainty daily because they trust the sensor)

  3. Legibilized dark matter monetizes privately without leaking to a centralized router (pharma pays $70–$150 CPMs for access to doctors' moment of highest intent, and OpenEvidence captures this)

  4. More and more solutions, from clinical trial patient recruitment to prior auth to medical device discovery, etc. continuously plug in, and compound dark matter origination/capture.

  5. As more edge/domain specific problems are legibilized and accurately/completely solved by an increasingly large ledger of solutions, a new signal is created and compounded: verified outcomes in the domain. These verified outcomes (how well certain matches solved certain problems) can be leveraged to improve the edge router via RL. This is a runaway advantage that’s very hard to replicate without 1), 2), and the time it takes for 4) to mature.

(4) and (5) mean the usefulness of the platform compounds and the ceiling on monetization naturally rises over time. For example, OpenEvidence just launched clinical trial matching and patient recruitment. Pharma companies currently pay CROs billions to recruit patients for clinical trials and run those trials, very slowly and inefficiently. If OpenEvidence can fill a Phase III trial faster and with better-matched patients than a CRO currently can, pharma companies benefit massively. The faster the recruitment period, the faster the trial starts, and the better the patients, the higher the probability the trial succeeds. A faster trial with a higher probability of success means more time for a drug under patent earning monopoly profits. To put some numbers around this, the average pharma company currently spends ~$2B a year just on patient recruitment (~$40K per patient), and 80% of trials are delayed. Each day of delay costs $600K–$8M in lost revenue depending on the drug. For a blockbuster (like the GLP-1s, Keytruda, etc.), a single day under patent is worth something like $8M. Pharma will pay much more than $70–$150 CPMs to accelerate that. This is the actuator expansion: OpenEvidence's primary actuator today is serving an ad (routing doctors’ attention to pharma), but clinical trial matching is a different, more valuable actuator entirely (routing patients to trials). The logical next actuator might be prior authorization automation (routing payment), and it’s hard to see where this stops: each new actuator extends the set of solutions OpenEvidence has access to and can execute over while keeping context dark from the centralized routers.
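To make the trial-acceleration arithmetic concrete, here's a back-of-envelope sketch using the figures above; the 90-day speedup is a hypothetical scenario, not a claimed result:

```python
# Back-of-envelope value of faster Phase III recruitment, using the
# per-day delay cost from the text. The 90-day speedup is hypothetical.
delay_cost_per_day = 8e6    # USD/day of lost monopoly revenue for a blockbuster
days_saved = 90             # hypothetical recruitment acceleration

value_of_speed = days_saved * delay_cost_per_day
print(f"90 days sooner on a blockbuster: ${value_of_speed / 1e6:.0f}M")  # $720M

# Compare with what the same pharma budget buys in ad inventory today:
cpm = 150                   # top of the $70-$150 CPM range, USD per 1,000 impressions
impressions_equivalent = value_of_speed / cpm * 1000
print(f"Equivalent ad spend: {impressions_equivalent / 1e9:.1f}B impressions")
```

Even a modest acceleration on a single blockbuster is worth on the order of billions of ad impressions at today's CPMs, which is why trial matching is a categorically larger actuator than serving ads.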

In this sense, OpenEvidence is the cleanest example of a pattern that is likely to be repeated by vertical edge routers across the economy. Wherever there are two (or more) pools of economically valuable but illegible context that centralized routers can't bridge (or aren’t trusted to) there's an opportunity for a router at the edge to originate the trusted sensor, originate/capture the dark matter, and make it legible to market participants that will pay.


  1. In the fullness of time labs/centralized economic world models in theory should be able to get it all, but there’s a middle game, and it’s worth considering deeply—especially because the rules of the game are getting revealed as we go. ↩︎

  2. In most cases, ad platforms have revenue growth that’s nonlinear with respect to usage growth, because they benefit from rising bid density. But there are plenty of treatments for rare diseases where there is no second-highest bidder. ↩︎
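The bid-density point can be illustrated with a toy second-price auction, where the winner pays the runner-up's bid; the bid values below are made up:

```python
def second_price_revenue(bids):
    """Revenue in a second-price auction: the winner pays the runner-up's bid.

    With a single bidder there is no runner-up, so absent a reserve price
    the clearing price collapses (the rare-disease case in the footnote).
    """
    if len(bids) < 2:
        return 0.0  # no second bidder, no price discovery
    top_two = sorted(bids, reverse=True)[:2]
    return top_two[1]

print(second_price_revenue([5.0]))            # 0.0: no rival bidder
print(second_price_revenue([5.0, 3.0]))       # 3.0
print(second_price_revenue([5.0, 3.0, 4.5]))  # 4.5: denser bidding lifts the price
```

Adding a third bidder raises revenue even though the winner and the winning valuation are unchanged, which is why usage growth that attracts more advertisers produces nonlinear revenue growth.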

  3. Note that this is not strictly monotonic. The dark matter generated by a million people watching cat videos is likely worth less than 600,000 doctors revealing high-entropy clinical context. ↩︎

  4. There’s a deeper epistemological failure mode that OpenEvidence uncovered here. The labs' entire theory of intelligence presupposes that more data and more compute produce more capable, more economically valuable systems across all domains. Their business model, capex strategy, and investor narrative all desperately want this to be true. OpenEvidence's success is a great counterexample: it’s creating massive economic value using specialized models trained on much less data. It’s not easy for the labs to acknowledge that less data of the right kind outperforms more data of every kind in one of the most economically valuable domains out there. In some respects, acknowledging this would call their entire strategy into question, or at least indicate that they could be asking the wrong questions. It would also mean that instead of one big win from a better model, there are N big wins, none of which are quite headline-worthy, for all N topics where there’s sufficient training data to produce a specialized model. At that point, their business is closer to that of a Bloomberg or FactSet: there’s still a lot of revenue (and margin!) in collecting and cleaning data, but it doesn’t scale the way a universal intelligence product would. ↩︎

You're on the free list for The Diff! Last week, paying subscribers read about making markets in death ($), how the SaaS-pocalypse is self-referential ($), and why software engineers are earning more per year but spending more of their time unemployed ($). Upgrade today for full access.

Upgrade Today

Diff Jobs

Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:

  • A leading AI transformation & PE investment firm (think private equity meets Palantir) that’s been focused on investing in and transforming businesses with AI long before ChatGPT (100+ successful portfolio company AI transformations since 2019) is hiring experienced forward deployed AI engineers to design, implement, test, and maintain cutting edge AI products that solve complex problems in a variety of sector areas. If you have 3+ years of experience across the development lifecycle and enjoy working with clients to solve concrete problems please reach out. Experience managing engineering teams is a plus. (Remote)
  • High-growth startup building dev tools for wrangling and debugging complex codebases is looking for someone who can personally execute the SaaS bear case: review the third-party software they use and figure out what to keep, what to drop, and what to implement in-house. (SF, DC)
  • Series A startup that powers 2 of the 3 frontier labs’ coding agents with the highest quality SFT and RLVR data pipelines is looking for growth/ops folks to help customers improve the underlying intelligence and usefulness of their models by scaling data quality and quantity. If you read arXiv, but also love playing strategy games, this one is for you. (SF)
  • Ex-Bridgewater, Worldcoin founders using LLMs to generate investment signals, systematize fundamental analysis, and power the superintelligence for investing are looking for machine learning and full-stack software engineers (Typescript/React + Python) who want to build highly-scalable infrastructure that enables previously impossible machine learning results. Experience with large scale data pipelines, applied machine learning, etc. preferred. If you’re a sharp generalist with strong technical skills, please reach out.
  • Ex-Citadel/D.E. Shaw team building AI-native infrastructure that turns lots of insurance data—structured and unstructured—into decision-grade plumbing that helps casualty risk and insurance liabilities move is looking for forward deployed data scientists to help clients optimize/underwrite/price their portfolios. Experience in consulting, banking, PE, etc. with a technical academic background (CS, Applied Math, Statistics) a plus. Traditional data scientists with a commercial bent also encouraged. (NYC)

Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.

If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.

Elsewhere

Cameras

Some frictional costs are the gap between good things happening and not, and some are load-bearing: if it were cheap enough to violate a norm, everyone would do it, but inconvenience is often the most effective kind of enforcement. The rise of smart glasses that can record everyday interactions is an example of this. Spend a day in a busy city and you'll probably see something entertaining; if you're wearing your Meta Ray-Bans, perhaps a million strangers will also find it entertaining. And maybe to at least one of those million strangers, the person you're watching isn't a stranger to them, and they'll get outed, and anyone who Googles their name will find a video about some viral mishap they were subject to years ago.

This is hard to avoid, and will take some getting used to. But in a way, it's just part of a general pattern of online cultural norms invading real-world ones. On the Internet, anonymity is hard, and it gets harder as a function of time spent and audience size. So eventually, everyone whose doxx is interesting gets doxxed. This is not an especially healthy norm online, but it's one that Internet natives have gotten used to, and as cameras get cheaper and it gets easier to search through a bunch of footage for the funniest possible clip, online and offline will converge in this respect.

Disclosure: long META.

Painting the Tape

There's a continuum of respectability for tender offers. At one end, there's the case where a company has been mismanaged for years, someone offers to buy them at a big premium, the board says no, and they just go to shareholders directly offering that same premium. At the other end are the bidders who do mini-tender offers, where they bid below market and hope that inattentive shareholders tender their shares anyway. Boaz Weinstein is somewhere in between, bidding 65-80% of stated net asset value for various Blue Owl-operated private credit vehicles ($, WSJ). At best, this is a pretty bare-knuckles arbitrage: short some public vehicles, then lob in a bid that implies that their private assets are severely mismarked, and hope to execute enough of both sides of the trade to roughly hedge out the risk. At worst, it's the kind of questioning of firms' asset values that tends to happen at a more volatile point in the cycle.

Edge Cases

OpenAI had a vigorous internal debate about whether to alert law enforcement about chats with a user who, months later, committed a school shooting ($, WSJ). Which sounds pretty bad, but means that OpenAI has actually done an extraordinarily good job at well-calibrated moderation. If zero OpenAI employees looked at it, that would mean they weren't really trying. If one looked at it and immediately called the cops, it would mean that they're taking quick action—but raise the question of how many other people's chats they were reading, whose privacy they were violating, etc. In this case, we're incredibly fortunate in that it was apparently the kind of edge case that had a dozen separate employees debating the pros and cons of taking action. In other words, there were people who saw a case for, there were people who saw a case against, there were people who tagged in other peers to the debate, etc.

It's just a fact that either a right to privacy must be surrendered to prevent mass shootings or the occasional mass shooting is an inevitable cost of respecting people's privacy. That's just how the tradeoff works. So long as one of the things people keep secret is the planning they do for mass-casualty events, and so long as that's not the only kind of secret in existence, there's a necessary tradeoff here. This case shows that OpenAI is slightly violating some people's privacy, by convening Slack chats to discuss what exact kind of crazy they are, but that they aren't automatically sharing every suspicious chat with law enforcement. That's the happy medium you should aim for: in an uncertain world, you want there to be cases where people documented qualms or warned about risks and the exact risk they warned about materialized. Because the alternative is either that every bad outcome is a complete surprise or that there's no way to know what limits law enforcement faces in accessing the private thoughts people share with LLMs.

Leaning in to Leaning Out

A long time ago, Uber's plan was to get to scale with human drivers and then replace all of them with AI. For various reasons, this didn't work out for them, but it did start to work out for other companies. Now, they're in a position where they're a great demand source for AVs: Uber customers know that they'll always be able to get a vehicle, even at times of peak demand, which means that AV fleet operators know that Uber is where the ride-hailing will happen. So they're taking the next step of offering services like training data, maps, fleet management, and financing to AV companies. In a way, this is Uber reaching even further back: before the company's ambition was to replace drivers entirely, it was just another capital-light complement to a capital- and operations-intensive industry. If other companies want to provide the capital, and if that capital barrier is so high that none of them can grow fast enough to win the entire market, Uber's in a great position.

Two-Tier Rounds

Startups are sometimes raising in two tranches, one at a low valuation and a smaller one right after at a higher one ($, WSJ). There are two good models here:

  1. Investors provide a mix of cash and "other," where other is sometimes close to zero by design (in a case where a diversified fund wants exposure and offers to leave management alone) and sometimes it's most of the package (being branded as the most legitimate company in a given space, for example). So in a given round, the value-added investors are overpaying, and everyone else is getting a good deal.
  2. Assets have a demand curve; you can sell more at a lower price, but might be able to sell some at a higher price. But it's a common practice to mark a company's value to whatever the last price it traded at was.

What this dynamic lets companies do is to say that they raised money from a high-profile fund, and to say that they got a great valuation, without quite coming out and admitting that the top-tier fund wouldn't have paid that valuation, and that the fund that thinks they're worth $1bn rather than $400m is not as strong a signal. So, like any case where people can benefit from slightly changing the form of some transaction without altering its economic fundamentals, this will either become a completely standard part of the process—redistributing some returns to the most recognizable funds—or turn into one of those fuzzy signals that the company in question is willing to game things a bit.


In this issue:

  • The Middle Game: Routers at the Edge—If there's an AI application that requires specialized knowledge, and is already attached to a lucrative existing business model, it can achieve escape velocity by getting rapid deployment and monopolizing the best sources of training data. This is already happening in healthcare.
  • Cameras—What happens when everyone's on camera all the time?
  • Painting the Tape—Bidding low to buy low.
  • Edge Cases—When mistakes show that you're well-calibrated.
  • Leaning in to Leaning Out—Uber once wanted to win in AVs, but the best thing they can do now is to commoditize the business so there are fewer big winners but many more participants.
  • Two-Tier Rounds—It's arguably unfair that investors who add different levels of value all get in on the same terms. But rectifying this makes companies look like they're doing better than they are.


The Middle Game: Routers at the Edge

A few months ago we wrote Routers, Apps, AGI, about how chatbots routing queries to whatever tool could answer them—another model, a third-party service, the checkout page for a service provider, contact information for a consultant to hire—could lead to LLMs basically rolling up the real economy. The basic idea was a Hayekian vision where the big problem is generic information-transmission: whenever something changes anywhere in the world, it changes people’s optimal behavior in unpredictable ways, and you need some system to transmit this information to the intended recipients without burying them under trivialities. Prices are an elegant way to do this, but intelligence-on-demand can operate in more dimensions.

In other words, AGI isn’t Nobel Prize winners in a datacenter but a superhuman coordination technology that represents some high-fidelity simulacrum of the economy itself. In the fullness of time, if we were all wearing Google Glass, or, going further, some kind of brain/machine interface, it would stand to reason that Google, with exclusive access to those high-fidelity, real-time sensors, could coordinate a significant amount of economic activity by organizing all the world’s high-entropy information as soon as it’s created.

OpenAI’s roughly trillion-dollar valuation (don’t be afraid to round up—it works for Sam Altman!) is a bet that OpenAI can apply this routing process over a larger share of the economy, or with more precision, than any other company. That valuation is roughly equal to the market cap Google has added since early September, implying that the market thinks Google’s decades-long head start in acquiring users and building cheap infrastructure to organize the world’s information might put them in the lead—with a preexisting business model that’s already quite good at routing natural-language queries to the highest bidder.

Search engines and LLM chatbots can free-ride on an existing infrastructure of sensors that have collected data and organized it in text form. But for now, they’re locked into relatively simplistic economic models where they can’t capture the maximum value they create. More narrowly-focused entities can do so: they can collect the subset of data that their users value, exclude anything that dilutes that value, and plug into a supply chain that already has experience extracting value. It’s hard for a general-purpose product to catch up to all the special-purpose tools and relationships an existing incumbent has. In particular, general-purpose labs have a shortage of special-purpose sensors, where a sensor is anything that collects data that would help an AI tool improve its world model in a way a particular user would appreciate.

The special-purpose applications already see economically-valuable problems before the big labs, and will continue to do so for a while.[1] There’s too much informational and economic dark matter that the big labs don’t see, and now smaller players know they either need to build a durable standalone business model or pivot to being mostly in the business of collecting data.

The maximalist router thesis is a very chatbot-centric view of the world, and while the broadest use of LLMs has been in using a general-purpose model to respond to natural-language queries (through a chatbot, or by adding inline AI-generated results to search results from traditional search engines), the fastest adoption outside of software is actually happening in healthcare, particularly with OpenEvidence.

OpenEvidence recently raised at a $12B valuation, up from $6B in October. The key driver is exponential ad (routing) revenue growth: they’re at a $150M annual run-rate and growing 30% month over month. And, like other ad platforms, they’re generating this revenue at ~90% gross margins.[2] This pattern-matches to plenty of cases where someone either found a value-added way to resell tokens or a way to move inference from COGS to opex so their unsustainable business model looked a little better. But OpenEvidence isn’t reselling tokens. They have built the only trusted sensor that can legibilize doctors’ uncertainty and intent at the moment of clinical decision-making. They sit at the edge of two pools of dark context that centralized routers cannot touch: doctors and patients on one side, and on the other the universe of medical journals, pharma companies, med device manufacturers, diagnostics companies, payors, clinical service providers, etc. who provide useful information and solutions to them. OpenEvidence’s traction shows something the market hasn't fully appreciated: vertical-specific sensors at the edge, and the routers they plug into, can capture enormous value precisely because of these limitations of a centralized, fully general approach. Their traction is a proof point for a general thesis on edge/vertical routers, gives us a great opportunity to unpack how our original concept of AGI as a clever matching engine is actually playing out in a specific area of the economy, and makes clear why the EV of one or two centralized routers is unlikely to eat all of global GDP anytime soon.
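For a sense of what that growth rate means if you just extend the line, here's a quick compounding sketch (pure arithmetic, not a forecast; no growth rate holds at 30% forever):

```python
# Quick arithmetic: what a $150M annual run-rate growing 30% month over
# month would compound to over 12 months, if the rate held (it won't
# indefinitely; this illustrates "exponential," it is not a projection).

run_rate = 150e6        # current annual run-rate, in dollars
monthly_growth = 0.30   # 30% month-over-month growth

for _ in range(12):
    run_rate *= 1 + monthly_growth

print(f"Implied run-rate a year out: ${run_rate / 1e9:.1f}B")  # ~$3.5B
```

Since 1.3¹² ≈ 23x, even a heavily discounted version of that growth rate supports a large multiple on current revenue.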

Fundamentally, this is because a router’s value goes beyond raw intelligence and is some multiplicative function of: (1) the absolute number of people you are solving problems for[3], (2) how economically valuable those people are, (3) the relative economic value of the problems you are solving for them (the value of the context you are originating/sensing/routing), (4) what fraction of the information needed to solve the problem is surfaced by these users (and what fraction of the cost of a solution is information rather than energy, raw materials, or transportation), and (5) how completely you are able to solve these problems, i.e., your access to and influence/execution over the comprehensive solution set and how naturally these actuators expand over time. This actuator control and expansion is generally some function of how successful you've been at generating/aggregating useful dark matter/context (1, 2, 3).

1-3 measure the quantity of previously illegible dark matter your sensors are originating and capturing as well as the quality/value of that context; 4 and 5 dictate how effectively you can route and monetize it.
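As a toy illustration (the function, scores, and scales here are all invented for the sketch; the text specifies only that the factors are multiplicative, not any particular functional form):

```python
from math import prod

def router_value(n_users, user_value, problem_value, info_fraction, completeness):
    """Toy multiplicative model of a router's value.

    Each argument is an illustrative score for one of the five axes
    above. Because the factors multiply, a near-zero on any one axis
    caps the whole product, no matter how strong the others are.
    """
    return prod([n_users, user_value, problem_value, info_fraction, completeness])

# A vertical router with fewer but higher-value users and problems can
# beat a general one with more users (cf. footnote 3's cat-video example):
general = router_value(1_000_000, 1, 1, 0.2, 0.3)
vertical = router_value(600_000, 10, 10, 0.8, 0.8)
print(vertical / general)  # 640.0: the vertical router wins despite fewer users
```

The multiplicative structure is the whole point: a centralized router that maximizes (1) but is weak on (3) and (4) can still end up with a smaller product.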

OpenEvidence is extremely privileged across all five axes. They're solving problems for doctors (the highest-earning professionals in the US), and for lots of them: as of last month, 50%+ of US physicians, or more than 600,000, were using OpenEvidence, for an average of 14 minutes a day. The last time a technology was adopted by doctors that fast, it was Google. They're solving the most economically valuable problem a doctor faces: clinical decision-making, i.e. the actual diagnosis and treatment of patients under uncertainty and in real time. And they are the most complete solution to date: grounded in evidence from the most prestigious medical journals, scoring 100% on the USMLE, helping doctors match patients to potentially life-saving clinical trials, suggesting context-aware treatment paths, medications, medical devices, etc. that are most likely to solve patients’ problems and help doctors do their jobs.

But the labs are also trying to build the universal API for healthcare. OpenAI, Anthropic, and Google have launched healthcare products. So what gives OpenEvidence durability?

In a few words, it’s compounding trust and what that enables. Doctors are perhaps the most credential-sensitive demographic in the world, in part because they spend their entire early adulthood earning an artificially scarce credential. A world where technology replaces credentials, legible signals of human expertise, and expert institutions makes a doctor feel very uncomfortable. If you want to know exactly how uncomfortable, next time you see your doctor, ask them about some piece of health advice you found on Google or ChatGPT and observe their facial expression and response. OpenEvidence understood this and ran a full-stack credibility strategy with a few interlocking parts. The first was explicit counterpositioning against the labs, who were training on the open internet (including health blogs, social media, etc.—any remedy that’s marketed with “Doctors hate this one weird trick” will be represented in a broad set of training data) and liable to produce the embarrassing (and potentially dangerous) medical outputs that resulted from doing so. OpenEvidence trained an ensemble of specialized models exclusively on 35 million peer-reviewed sources, bootstrapping initially on public domain material from the FDA, CDC, PubMed, etc. Their models have no connection to the public internet during training or inference whatsoever. This meant hallucination risk in their early system was meaningfully lower than pre-o1/reasoning-paradigm LLMs, and the product was free, so doctors began adopting it virally.[4]

A number of those early adopters happened to be senior members of the editorial boards at the most prestigious medical journals, which led to the next piece: OpenEvidence was able to lock down exclusive content partnerships with JAMA, NEJM, NCCN, the American Medical Association, all 11 JAMA specialty journals, the American Academy of Family Physicians, the American College of Emergency Physicians, etc. They used this exclusive, credentialed ground truth to train an ensemble of models that became the first to score 100% on the USMLE. This attracted more doctors, which attracted more credentialed ground truth, and the flywheel began spinning. Trust with practitioners became trust with the institutions that credential them, which became exclusive access to the ground truth that made the system better and deepened trust with practitioners further. The NEJM partnership in particular was pivotal and illustrates the supply-side flywheel nicely. Daniel Nadler, OpenEvidence’s CEO, provides some interesting context there: “some of these really well-funded AI companies threw enormous amounts of money at them [NEJM] and they said no. If they’re a private company, they probably would have said yes, but they’re a nonprofit so they said no because the Massachusetts Medical Society, which is a nonprofit organization, cared more about the sanctity and the pristineness of their mission as a nonprofit than they did about just trying to score some sort of quick commercial contract.” In fact, the NEJM reached out to OpenEvidence, not the other way around: “In our case, we didn’t show up at their door. A number of the very senior people on the editorial board of the New England Journal of Medicine were power users of OpenEvidence, and they wanted their content to show up in the thing that they were using.”

Beyond exclusive supply and counterpositioning, OpenEvidence made their corporate identity synonymous with credentialed trust. They engineered their about/team page to speak to doctors’ biases: under each team member's name is not where they previously worked or their current title in the company, but only which prestigious educational credentials they earned and from which institutions.

By arbitraging credibility from the journals to create a superior system, mirroring doctors' own credential-sensitivity back to them, and offering the product for free, they aggregated 50% of the physicians in the US. And by aggregating doctors they find themselves in a position of extraordinary power in the healthcare market. Doctor-friendly solution providers, from the remaining scientific journals and universe of pharma/med device businesses, to diagnostics services providers, vertical systems of record like Veeva and the EHR platforms like Epic, payors, and clinical research organizations etc. are all lining up to plug in, stream their context, and make OpenEvidence’s platform closer to a medical super intelligence than it already is.

Put simply, OpenEvidence has become the only healthcare router that’s fed by a trusted sensor, and this has interesting implications. Because doctors trust the sensor, they reveal their clinical uncertainty naturally, in real time, as a byproduct of using it. This dark matter (clinical uncertainty based on high-entropy, idiosyncratic patient situations) is very much created, not discovered, and it’s created by virtue of trust. A centralized router can't replicate this by offering superior general intelligence alone, because doctors won't generate the context for a platform they don't trust. In fact, you can model the set of things doctors ask and reveal to OpenEvidence as the exact set of things they would be very hesitant to ask ChatGPT about: the lack of trust creates sky-high verification costs, and the asymmetric downside from not verifying untrusted outputs means that the valuable context is never originated.

This is worth dwelling on because it inverts a common assumption about context and AI. The default mental model is one of discovery: valuable information exists out there in the world, and the job of a sensor is to go find it, scrape it, whatever, and pipe it back to a router. But OpenEvidence’s service is closer to selling confirmation of an informed guess, plus documentation to back it up. The doctor’s thought trace, which boils a real-time data stream of patient symptomology, diagnostic results, medical history, and all of their prior knowledge and intuition down into a clinical hypothesis, and particularly the doubts they have around that hypothesis (“clinical uncertainty”), previously did not exist in any system: on-prem, in the cloud, on paper, anywhere. Perhaps it existed as airwaves when a doctor asked another doctor they trusted for advice about a specific patient scenario. No one could survey doctors at scale and in real time about their diagnostic uncertainty; the best doctors wouldn't fill out the survey, and even if they did, the act of filling out a survey is not qualitatively the same as the context revealed from a constant stream of real, novel patient cases, under real pressure and uncertainty. The Mercors, Surges, and Scales of the world are trying to replicate this for the labs, but it is not of the same quality, nor does quantity of decent inputs make up for the quality of the best ones: companies that are hiring doctors to provide and rate answers for general-purpose AI tools are hiring the doctors who aren’t minting money by using special-purpose AI tools, and are presumably getting adversely selected. And it’s very hard to change this because of the time value of money. Mercor, Surge, Scale, etc. are paying you to train a model whose outputs, as a result of that training, will be valuable at some point in the future. A patient or insurance company is paying a doctor for outputs that are immensely valuable to them (at least in theory) today.

OpenEvidence’s context is a byproduct of genuine use, and genuine use is a byproduct of trust, and trust is a byproduct of the credentialed ground-truth supply that comes from existing at the edge. If you remove any link in that chain, the dark matter can’t be legibilized (because it wouldn’t even exist). This goes beyond Hayek's knowledge problem: Hayek showed that valuable knowledge is dispersed and local and can't be centrally collected; in the case of routers at the edge like OpenEvidence, the knowledge hasn't even been created yet. It is constituted by the interaction between a trusted sensor and the economic actor using it, which means a centralized router can't access it, not because it's hard to find, but because there is nothing to find.

This makes routers at the edge very durable: they not only see context others can't, but also generate context others can't, because the context is constituted by the relationship between the sensor and the economic actors using it. And supply follows this scarce context, which makes the solution more and more useful, and creates a flywheel where the context generated is increasingly scarce: as the platform handles more of the routine cases (the ones already inside the training distribution), the remaining queries doctors bring to it are increasingly outside the existing data (e.g. genuine edge cases, novel clinical uncertainty, etc.). So the platform's training corpus self-selects for progressively higher-entropy, higher-value tokens over time. We could even see a two-track healthcare system, where ChatGPT-enhanced urgent care and medical tourism take care of prosaic issues and compete on price, while there’s a basically separate healthcare system that solves trickier problems, harvests proprietary data, and has immense pricing power.

We can generalize this:

  1. Exclusive credentialed ground truth (JAMA, NEJM partnerships) makes the sensor trusted

  2. Trust legibilizes latent dark matter (50%+ of US doctors reveal their clinical uncertainty daily because they trust the sensor)

  3. Legibilized dark matter monetizes privately without leaking to a centralized router (pharma pays $70–$150 CPMs for access to doctors' moment of highest intent, and OpenEvidence captures this)

  4. More and more solutions, from clinical trial patient recruitment to prior auth to medical device discovery, etc. continuously plug in, and compound dark matter origination/capture.

  5. As more edge/domain specific problems are legibilized and accurately/completely solved by an increasingly large ledger of solutions, a new signal is created and compounded: verified outcomes in the domain. These verified outcomes (how well certain matches solved certain problems) can be leveraged to improve the edge router via RL. This is a runaway advantage that’s very hard to replicate without 1), 2), and the time it takes for 4) to mature.

Points 4) and 5) mean the usefulness of the platform compounds and the ceiling on monetization naturally rises over time. For example, OpenEvidence just launched clinical trial matching and patient recruitment. Pharma companies currently pay CROs billions to recruit patients for clinical trials and run those trials, very slowly and inefficiently. If OpenEvidence can fill a Phase III trial faster and with better-matched patients than a CRO currently can, pharma companies benefit massively. The faster the recruitment period, the faster the trial starts; and the better the patients, the higher the probability the trial succeeds. A faster trial with a higher probability of success means more time for a drug under patent earning monopoly profits. To put some numbers around this, the average pharma company currently spends ~$2B a year just on patient recruitment (~$40K per patient), and 80% of trials are delayed. Each day of delay costs $600K–$8M in lost revenue depending on the drug. For a blockbuster (like the GLP-1s, Keytruda, etc.), a single day under patent is worth something like $8M. Pharma will pay much more than $70–$150 CPMs to accelerate that. This is the actuator expansion: OpenEvidence's primary actuator today is serving an ad (routing doctors’ attention to pharma), but clinical trial matching is a different, more valuable actuator entirely (routing patients to trials). The logical next actuator might be prior authorization automation (routing payment), and it’s hard to see where this stops: each new actuator extends the set of solutions OpenEvidence has access to and can execute over while keeping context dark from the centralized routers.
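A back-of-the-envelope on those figures (the 90-day acceleration and the one-impression-per-doctor-per-day cadence are assumptions made for the sketch; the per-day patent value, CPM range, and doctor count come from the text above):

```python
# Compare the value of accelerating a blockbuster's Phase III trial
# against a year of ad revenue at the quoted CPMs.

PER_DAY_UNDER_PATENT = 8e6   # ~$8M/day for a blockbuster (from the text)
DAYS_SAVED = 90              # assumption: recruitment ends a quarter early

acceleration_value = PER_DAY_UNDER_PATENT * DAYS_SAVED

CPM = 150                    # top of the quoted $70-$150 range
DOCTORS = 600_000
impressions = DOCTORS * 365  # assumption: one impression per doctor per day
ad_revenue = impressions / 1000 * CPM

print(f"90 days of acceleration: ${acceleration_value / 1e6:.0f}M")  # $720M
print(f"A year of saturated ads: ${ad_revenue / 1e6:.0f}M")          # ~$33M
```

On these rough numbers, a single accelerated blockbuster trial is worth an order of magnitude more than a year of ad inventory, which is the sense in which trial matching is a more valuable actuator than serving ads.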

In this sense, OpenEvidence is the cleanest example of a pattern that is likely to be repeated by vertical edge routers across the economy. Wherever there are two (or more) pools of economically valuable but illegible context that centralized routers can't bridge (or aren’t trusted to) there's an opportunity for a router at the edge to originate the trusted sensor, originate/capture the dark matter, and make it legible to market participants that will pay.


  1. In the fullness of time labs/centralized economic world models in theory should be able to get it all, but there’s a middle game, and it’s worth considering deeply—especially because the rules of the game are getting revealed as we go. ↩︎

  2. In most cases, ad platforms have revenue growth that’s nonlinear with respect to usage growth, because they benefit from rising bid density. But there are plenty of treatments for rare diseases where there is no second-highest bidder. ↩︎

  3. Note that this is not strictly monotonic. The dark matter generated by a million people watching cat videos is likely worth less than 600,000 doctors revealing high-entropy clinical context. ↩︎

  4. There’s a deeper epistemological failure mode that OpenEvidence uncovered here. The labs' entire theory of intelligence presupposes that more data and more compute produce more capable, economically valuable systems across all domains. Their business model, capex strategy, and investor narrative all desperately want this to be true. OpenEvidence's success is a really great counterexample: it’s creating massive economic value using specialized models trained on much less data. It’s not easy for the labs to acknowledge that less data of the right kind outperforms more data of every kind in one of the most economically valuable domains out there. In some respects, acknowledging this would call their entire strategy into question, or at least be some indication that they could be asking the wrong questions. It would also mean that instead of one big win from a better model, there are N big wins, none of which are quite headline-worthy, for all N topics where there’s sufficient training data to produce a specialized model. At that point, their business is closer to that of a Bloomberg or FactSet: there’s still a lot of revenue (and margin!) in collecting and cleaning data, but it doesn’t scale the way a universal intelligence product would. ↩︎

You're on the free list for The Diff! Last week, paying subscribers read about making markets in death ($), how the SaaS-pocalypse is self-referential ($), and why software engineers are earning more per year but spending more of their time unemployed ($). Upgrade today for full access.

Upgrade Today

Diff Jobs

Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:

  • A leading AI transformation & PE investment firm (think private equity meets Palantir) that’s been focused on investing in and transforming businesses with AI long before ChatGPT (100+ successful portfolio company AI transformations since 2019) is hiring experienced forward deployed AI engineers to design, implement, test, and maintain cutting edge AI products that solve complex problems in a variety of sector areas. If you have 3+ years of experience across the development lifecycle and enjoy working with clients to solve concrete problems please reach out. Experience managing engineering teams is a plus. (Remote)
  • High-growth startup building dev tools for wrangling and debugging complex codebases is looking for someone who can personally execute the SaaS bear case: review the third-party software they use and figure out what to keep, what to drop, and what to implement in-house. (SF, DC)
  • Series A startup that powers 2 of the 3 frontier labs’ coding agents with the highest quality SFT and RLVR data pipelines is looking for growth/ops folks to help customers improve the underlying intelligence and usefulness of their models by scaling data quality and quantity. If you read arXiv, but also love playing strategy games, this one is for you. (SF)
  • Ex-Bridgewater, Worldcoin founders using LLMs to generate investment signals, systematize fundamental analysis, and power the superintelligence for investing are looking for machine learning and full-stack software engineers (Typescript/React + Python) who want to build highly-scalable infrastructure that enables previously impossible machine learning results. Experience with large scale data pipelines, applied machine learning, etc. preferred. If you’re a sharp generalist with strong technical skills, please reach out.
  • Ex-Citadel/D.E. Shaw team building AI-native infrastructure that turns lots of insurance data—structured and unstructured—into decision-grade plumbing that helps casualty risk and insurance liabilities move is looking for forward deployed data scientists to help clients optimize/underwrite/price their portfolios. Experience in consulting, banking, PE, etc. with a technical academic background (CS, Applied Math, Statistics) a plus. Traditional data scientists with a commercial bent also encouraged. (NYC)

Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.

If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.

Elsewhere

Cameras

Some frictional costs are the gap between good things happening and not, and some are load-bearing: if it were cheap enough to violate a norm, everyone would do it, but inconvenience is often the most effective kind of enforcement. The rise of smart glasses that can record everyday interactions is an example of this. Spend a day in a busy city and you'll probably see something entertaining; if you're wearing your Meta Ray-Bans, perhaps a million strangers will also find it entertaining. And maybe to at least one of those million strangers, the person you're watching isn't a stranger to them, and they'll get outed, and anyone who Googles their name will find a video about some viral mishap they were subject to years ago.

This is hard to avoid, and will take some getting used to. But in a way, it's just part of a general pattern of online cultural norms invading real-world ones. On the Internet, anonymity is hard, and it gets harder as a function of time spent and audience size. So eventually, everyone whose doxx is interesting gets doxxed. This is not an especially healthy norm online, but it's one that Internet natives have gotten used to, and as cameras get cheaper and it gets easier to search through a bunch of footage for the funniest possible clip, online and offline will converge in this respect.

Disclosure: long META.

Painting the Tape

There's a continuum of respectability for tender offers. At one end, there's a case where a company has been mismanaged for years, someone offers to buy them at a big premium, the board says no, and they just go to shareholders directly offering that same premium. At the other end are the bidders who make mini-tender offers, where they bid below market and hope that shareholders hear about it and tender their shares. Boaz Weinstein is somewhere in between in bidding 65-80% of stated net asset value for various Blue Owl-operated private credit vehicles ($, WSJ). At best, this is a pretty bare-knuckles arbitrage: short some public vehicles, then lob in a bid that implies that their private assets are severely mismarked, and hope to execute enough of both sides of the trade to roughly hedge out the risk. At worst, it's the kind of questioning of firms' asset values that tends to happen at a more volatile point in the cycle.

Edge Cases

OpenAI had a vigorous internal debate about whether to alert law enforcement about chats with a user who, months later, committed a school shooting ($, WSJ). Which sounds pretty bad, but means that OpenAI has actually done an extraordinarily good job at well-calibrated moderation. If zero OpenAI employees looked at it, that would mean they weren't really trying. If one looked at it and immediately called the cops, it would mean that they're taking quick action—but raise the question of how many other people's chats they were reading, whose privacy they were violating, etc. In this case, we're incredibly fortunate in that it was apparently the kind of edge case that had a dozen separate employees debating the pros and cons of taking action. In other words, there were people who saw a case for, there were people who saw a case against, there were people who tagged in other peers to the debate, etc.

It's just a fact that either a right to privacy must be surrendered to prevent mass shootings or that the occasional mass shooting is an inevitable cost of respecting people's privacy. That's just how the tradeoff works. So long as one of the things people keep secret is the planning they do for mass-casualty events, and so long as that's not the only kind of secret in existence, there's a necessary tradeoff here. This case shows that OpenAI is slightly violating some people's privacy, by convening Slack chats to discuss what exact kind of crazy they are, but that they aren't automatically sharing every suspicious chat with law enforcement. That's the happy medium you should aim for: in an uncertain world, you want there to be cases where people documented qualms or warned about risks and the exact risk they warned about materialized. Because the alternative is that either every bad outcome is a complete surprise or that there's no way to know what limits law enforcement has on accessing the private thoughts people share with LLMs.

Leaning in to Leaning Out

A long time ago, Uber's plan was to get to scale with human drivers and then replace all of them with AI. For various reasons, this didn't work out for them, even as it started to work out for other companies. Now, they're in a position where they're a great demand source for AVs, because Uber customers know that they'll always be able to get a vehicle, even at times of peak demand, which means that AV fleet operators know that Uber is where the ride-hailing will happen. So they're taking the next step of offering services like training data, maps, fleet management, and financing to AV companies. In a way, this is Uber reaching even further back: before the company's ambition was to replace drivers entirely, it was just another capital-light complement to a capital- and operations-intensive industry. If other companies want to provide the capital, and if that capital barrier is so high that none of them can grow fast enough to win the entire market, Uber's in a great position.

Two-Tier Rounds

Startups are sometimes raising in two tranches, one at a low valuation and a smaller one right after at a higher one ($, WSJ). There are two good models here:

  1. Investors provide a mix of cash and "other," where other is sometimes close to zero by design (in a case where a diversified fund wants exposure and offers to leave management alone) and sometimes it's most of the package (being branded as the most legitimate company in a given space, for example). So in a given round, the value-added investors are overpaying, and everyone else is getting a good deal.
  2. Assets have a demand curve; you can sell more at a lower price, but might be able to sell some at a higher price. But it's a common practice to mark a company's value to whatever the last price it traded at was.

What this dynamic lets companies do is to say that they raised money from a high-profile fund, and to say that they got a great valuation, without quite coming out and admitting that the top-tier fund wouldn't have paid that valuation, and that the fund that thinks they're worth $1bn rather than $400m is not as strong a signal. So, like any case where people can benefit from slightly changing the form of some transaction without altering its economic fundamentals, this will either become a completely standard part of the process—redistributing some returns to the most recognizable funds—or turn into one of those fuzzy signals that the company in question is willing to game things a bit.
