I was inspired by Tom Cunningham’s notes on two economics of transformative AI workshops to write something similar. Last month, the Windfall Trust hosted the London version of “Economic Scenarios for Transformative AI,” gathering an excellent group of economists, policymakers, technologists, philosophers, and other experts to talk about a scenario the UK could face.
“By October 2030, nearly a third of earners in the top 20% had either lost their jobs or seen their roles dramatically downgraded,” and GDP growth hit 4% in 2028. When Windfall circulated the scenario before the workshop, some thought it was ridiculously heady: the UK hasn’t seen sustained growth at this rate since the era of Heath and Wilson, over half a century ago. But I suppose if AI is a normal technology, the UK will have the same problems and advantages as it does now, so there’s no point going over the same ground. Other reviewers found the scenario far too conservative: all knowledge work, they insisted, would be gone by 2030. I’ll let you guess whether their office is closer to Shoreditch or Whitehall.
I won’t question the scenario, since the workshop asked us to take it as given. Instead, I want to talk about what the workshop taught me about how to communicate transformation. At first, I was left wanting more, because even though we agreed on the gravity of the disruption, our ideas oscillated between the incrementally boring and the terrifying but paralyzing. But I left thinking that the boring ideas were actually the more effective ones, because even if they didn’t superficially sound equal to the occasion, they started from expectations people already held and extended institutions rather than reinvented them.
The day was split into two halves. In one half, we thought of ways the UK could take maximum advantage of transformative AI. The room was not shy about confronting the uncomfortable. One attendee ventured that they just didn’t see any way organized labor would stay a core part of the social contract. An engineering lead at an AI company declared that they would never hire a junior-level employee again—not because juniors would need to reskill, but because they couldn’t imagine juniors ever contributing useful cognitive labor from here on out. A third panelist observed, to a depressing lack of dissent, that most of the ideas being floated would be illegal in the UK. And that’s unlikely to change, because, in their estimation, “only six or seven” political principals in the world have truly internalized the transformative potential of AI, “maybe one or two” in the UK, with the rest in the Trump administration and the Emirates.
Radical sentiments didn’t necessarily translate into radical plans. In the other half of the workshop, we advised the prime minister on what to say in this hypothetical 2030 of high unemployment and high growth. Uncertainty and small differences tempered what we collectively thought was best policy. Plus, remember that most ideas were illegal. I didn’t cover myself in glory in committee. My group spent a good fraction of our precious hour (to solve everything) debating whether point three of our three-point plan should be named “retraining” or “retraining… for opportunity.” This semantic breakthrough proved immaterial, though, since the most memorable speech award must go to another group’s representative, who got up and in one breath reeled off: “We’re going to have an income tax, a capital gains tax, a corporate tax, a VAT, a departure tax, a dividend tax, an inheritance tax, a land tax, and even a carbon tax. We know this might drive some companies offshore but we’ll figure that out when it comes.” I still think Armando Iannucci is a genius; I just didn’t know how much of his script writes itself.
You can imagine, given my bias from an industry that worships speed and reinvention, how this at first deflated me. For all the expertise in the room and shared conviction on the stakes, we had no clearer picture of how things would go. I thought of a recent observation from Dean Ball, that the main fissure in AI politics now wasn’t left or right, safety or acceleration, but “takes advanced AI seriously as a concept vs. does not take advanced AI seriously.” He encountered middle powers at the India AI summit in denial about the ever more likely arrival of superintelligence and what it would mean for their nations. The Department of War’s quarrel with Anthropic, too, is about the kind of technology AI is. When Claude turns to blackmail, the government is no more worried than it is about a computer virus, and even if it should be more worried, all the more reason the rules shouldn’t be written in a private company’s contract.
But there was little of this disagreement at the workshop. Participants ranging from unions to media to frontier labs attended because they all took transformative AI seriously. Yet even with that first fissure closed, the ideas that survived a room full of people who disagreed on everything else, and the uncertainty of the future, still looked modest.
In his 1790 pamphlet Reflections on the Revolution in France, Edmund Burke explained why he thought the French Revolution would fail but England’s Glorious Revolution succeeded. When England had to achieve the extraordinary feat of replacing its king, it framed the act as “a small and a temporary deviation” from the existing order. Somers drafted the Declaration of Right to make revolution look like continuity; it was “a marvellous providence” that the new monarchs could reign “on the throne of their ancestors.” England passed down its crown, church, peerage, and commons, holding that “people will not look forward to posterity, who never look backward to their ancestors.”
Burke is unsparing toward the French revolutionaries, who tore down the walls and foundations of “a noble and venerable castle,” discarding all the advantages of their ancient state to rebuild anew from first principles. He accused them of “despising everything” that belonged to them: “They have no respect for the wisdom of others; but they pay it off by a very full measure of confidence in their own.” And so they lacked “circumspection and caution” when they made choices for the “sentient beings, by the sudden alteration of whose state, condition, and habits, multitudes may be rendered miserable.” In research engineering, too, I spare a thought for the multitudes of sentient beings rendered miserable by a total codebase rewrite.
Before I succumb to making a general case for conservatism, it’s worth remembering that Burke himself was a reformist, as Henry Oliver thoroughly argues. Don’t forget in this auspicious year that he was for the American colonists. Burke was for transformative change through existing frameworks. England altered existing law as little as possible to change its succession, lest authority be confused with power. “A state without the means of some change, is without the means of its conservation.”
The workshop ideas I discounted for being boring were Burkean. Someone in my group fairly pointed out that I kept fast-forwarding to the worst case and concluding that nothing was worth trying. The better instinct was another proposal that cut through the committee fog: a national address. The PM should get up and be honest about the disruption AI would bring. Something like: nothing has changed. We made a deal that if the country got richer, everyone would get richer. That hasn’t happened for a generation, but AI gives us the chance to finally keep the promise made long ago. Taxes and benefits don’t sound as adequate to the moment as proclaiming a new social contract, which is exactly why they would work as one.
The AI industry has all kinds of French revolutionary tendencies. In our own self-conception, we’re bold, inevitable, and on the right side of history. At our worst, we’re ignoring our inheritance to remake the world from abstract rational principles, dismissive of accumulated experience, and impatient that no one else is keeping up. The resistance we’re now meeting to that impulse is earned. There’s a growing coalition, wary of us, that counts among its members teachers, parents, artists, journalists, our neighbors, regulators, the left and the right. Even the new Toy Story is anxious about AI.
Yes, the water issue is fake, we should build more data centers and power plants, and I agree with Paul Ford that even when you give someone an honest warning sometimes they just need to screech. So while I wouldn’t concede the substance, I wonder how we can be more careful in delivering it.
There’s that moment dramatized in Oppenheimer where Truman calls Oppenheimer a crybaby for wailing that he has blood on his hands. I don’t think we read this as the indictment of Oppenheimer that it should be. Oppenheimer is so obsessed with the enormity of the transformation and his own place in it (“I am become death”) that he fails to convey anything actionable. As it happened, neither his pacifism nor von Neumann’s rational preemptive strike steered foreign policy. Instead containment and deterrence, gradualist and institutional, absorbed the transformative technology. The bomb eventually did change everything; the Oppenheimer of that scene just played less of a part than he thought he deserved.
If Burke could see the AI industry today, he might suggest we frame the disruption as a faster version of something that has happened before: we’ll have to fix familiar problems with the tools we have, only faster, rather than announce that most jobs as we know them will be wiped out, implying nothing but blank slates to come. He might wish we’d vaguepost less, or at least spell out what changes and what doesn’t instead of leaving it at “I’ve seen things you people wouldn’t believe.” Finally, and I don’t know what this is a symptom of, but we could be more mature on the world stage. I get that if the next few years are the most pivotal in history, there’s not much time or value in learning diplomatic proprieties. It’s also radical in the Burkean sense not to read off your phone when speaking to the Prime Minister of India.
Ideas which fulfill the expectations that everyone already has are not only the most politically feasible, but might even be the soundest ones. Retraining, an idea I was skeptical of in my workshop committee, is one that Anthropic, hardly pessimists about AI’s labor impact, counts as fitting “nearly all scenarios.” I’ve also always been struck by the stickiness of universal basic income or assets as an idea, even though everyone seems to have their own reservations about it. Universal basic compute or a token tax sounds more fit for purpose, but maybe the good old price mechanism just works; redistribution has plenty of headroom without needing new instruments.
I have two friends well outside the AI world, one melancholy and the other exhilarated. The first is clear-eyed about what AI can do and that it won’t be held back, but grieves that something real is being lost in the way his industry used to do things. The second is puzzled by the alarm, because she sees continuity with all the change she’s had to adapt to before, is supremely confident in her ability to do so, and welcomes volatility as an opportunity for social mobility. Jia Zhangke’s New Year’s AI short film captures this duality well: the inheritance of the past, the vertigo of change, and the fascination that draws you forward. All these feelings are true. And if the transformation is going to feel bearable, we have to stitch the past with what’s coming in the same breath. Burke is too pleasurable to read not to quote one more time:
By a slow, but well-sustained progress, the effect of each step is watched; the good or ill success of the first gives light to us in the second; and so, from light to light, we are conducted with safety through the whole series… We compensate, we reconcile, we balance. We are enabled to unite into a consistent whole the various anomalies and contending principles that are found in the minds and affairs of men. From hence arises, not an excellence in simplicity, but one far superior, an excellence in composition.
Transformation not in a final revelation, but from light to light.
