The new ruling philosophy regarding AI

I suggest that (for now) it’s a mix of Marc Andreessen, Leopold Aschenbrenner, and Guillaume Verdon.

But let’s back up first. What was the Biden-Harris administration’s philosophy regarding AI? For the first half of Biden’s term, I would say they didn’t have one. As with most of the world, it was the release of ChatGPT in late 2022 that made AI a top-level issue.

And though it’s not as if Biden’s cabinet contained any avowed effective altruists, I do think that by default, the “safetyist” attitude towards AI associated with effective altruism and Less Wrong rationalism was philosophically influential, not least because influential advocates of effective altruism were part of the elite Democratic base. FTX’s Sam Bankman-Fried came from that background; Open Philanthropy’s Dustin Moskovitz is another example.

(I’ve listed three people as alleged thought leaders for the Trump 2.0 era; if I were going to pick three for the second half of the Biden era, maybe it would be Paul Christiano, Helen Toner, and Jason Matheny.)

As most of us know, irritation with AI safetyism did a lot to inspire a few Silicon Valley memelords to create an alternative ideology, “effective accelerationism”; and after the unsuccessful OpenAI coup against Sam Altman at the end of 2023, e/acc was widely considered to have won the culture war against effective altruism in the tech world.

Now, looking back from the end of 2024, we can see that many of the tech figures who affiliated with e/acc at the start of the year had defected from elite consensus to ally with the victorious Trump Republicans by the end of it. This is why I regard e/acc as a major component of the emerging zeitgeist regarding AI and AI policy.

What follows is far more a product of intuitive speculation than scholarship. Also, I don’t live in North America, I’m poor, and I have zero experience of contemporary Silicon Valley (or Washington DC, for that matter). I am a distant observer of all this. I am prepared to be corrected by people who are actually in the thick of things.

But for now, I don’t see anyone making a clear claim about which ideas will inform the thinking of the incoming American government and its allies. So this is my “model” of what’s ahead, make of it what you will.

First, Marc Andreessen. A pivotal figure in the 1990s Internet, co-founder of the browser company Netscape, who seems to have then risen into the investment Valhalla of billionaire venture capitalists. In the wake of e/acc and ChatGPT, he wrote a “techno-optimist manifesto” that incorporates AI into an older narrative of human progress through technology and capitalism. It’s good enough to stand as an example of its genre; it expounds a particular perspective on history, politics and economics that is probably shared by many of these tech captains of industry; and it ends with a list of about 50 other thinkers whom Andreessen considers to be fellow travelers, so you can read them if you want more details.

Second, Leopold Aschenbrenner. A young former employee of OpenAI who became a wunderkind of AI strategic policy in mid-2024, thanks to the publication of his manifesto entitled “Situational Awareness”. It’s been discussed here on Less Wrong. His manifesto first endorses short timelines for superhuman AI, saying that it’s coming later this decade, and then says that the democratic world, led by the USA, must create and domesticate superhuman AI before a geopolitical and ideological rival like China does so; and that this should be done by the nationalization of labs engaged in research on frontier AI, as part of a new Manhattan Project aimed at solving superalignment.

Aschenbrenner has therefore fused the tech narrative of imminent superhuman AI, and the safetyist narrative according to which the preferences of superhuman AI will shape the future of life on Earth, with an America-First national-security perspective. A month before the vote, Ivanka Trump tweeted favorably about his manifesto, so we know that the incoming First Family has heard of it.

Third, Guillaume Verdon. Originally known only as e/acc co-founder @BasedBeffJezos, he was doxxed by Forbes in the same week that Biden’s commerce secretary publicly declared e/acc to be dangerous, and just a week after Altman was reinstated as OpenAI CEO. He was revealed as a Canadian quantum-information physicist (his thesis is quite interesting, if you’re into that) who worked on quantum AI at Google before co-founding his own startup, Extropic, with the idea of running AI on stochastic computer chips that directly utilize non-Gaussian thermodynamic randomness to implement cognitive probability distributions (rather than doing everything at the software level).
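
(To make that contrast concrete, here is a minimal sketch of my own, in ordinary Python, and emphatically not anything from Extropic’s actual designs: sampling from an energy-based probability distribution in software, using a pseudorandom number generator. On the kind of hardware Verdon describes, the sampling step itself would be carried out physically by thermodynamic noise in the circuit rather than simulated like this. The function name and the toy energies are purely illustrative.)

```python
import math
import random

def boltzmann_sample(energies, temperature=1.0):
    """Draw one state from a Boltzmann distribution over discrete states.

    This is the part a thermodynamic chip would do "for free" in hardware;
    here we simulate it with a software pseudorandom number generator.
    """
    weights = [math.exp(-e / temperature) for e in energies]
    total = sum(weights)
    probabilities = [w / total for w in weights]
    r = random.random()
    cumulative = 0.0
    for state, p in enumerate(probabilities):
        cumulative += p
        if r < cumulative:
            return state
    return len(energies) - 1

# Toy usage: three states with different energies; lower-energy states
# are sampled more often, which is the whole point of the distribution.
counts = [0, 0, 0]
for _ in range(10_000):
    counts[boltzmann_sample([0.0, 1.0, 2.0])] += 1
print(counts)
```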

I’ve already mentioned the role that e/acc has played in bringing together Trump’s allies in the tech sector (though its role there is overshadowed by Elon Musk and his 2022 purchase of Twitter). When it comes to philosophy, e/acc has a deserved reputation for glib memeing and sloganeering. However, I have included Verdon on this list because I also find implicit in his thoughts an alternative to the influential model of the future (which Aschenbrenner arguably favors, as do I), according to which the creation of human-level AI will be followed by the emergence of “superintelligent” AI whose goals then dominate the world, regardless of whether that value system is “liberal democracy” or “more paperclips”.

In his talk “Thermodynamics of techno-capitalism”, Verdon instead presents a model of evolution that is persistently pluralistic. It’s still pretty sparse and undeveloped—maybe the few minutes after 10:00 are where it is most spelled out—but it’s one of complex systems that for thermodynamic reasons learn, and learn to learn. Competition never goes away, and values are never final. The fundamental metric of progress is how much energy you are able to spend, and that applies all the way from the first cells surviving in the primordial soup, to AI companies surviving in the global marketplace, and presumably on to whatever new interplanetary and interstellar forms of being emerge from life on Earth.
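 
(Since this is my own loose reading, here is an equally loose toy illustration, with entirely hypothetical niches and parameters: lineages grow in proportion to the energy they can capture, but because that energy comes from several distinct niches, no single lineage takes over and the population stays pluralistic. It is only meant to make the “energy throughput as the metric, competition never ends” picture concrete, not to model anything Verdon has actually published.)

```python
# Each lineage is specialised for one niche; a niche's energy is split among
# the lineages exploiting it, in proportion to their current size.
niche_energy = {"light": 100.0, "heat": 60.0, "chemical": 40.0}
lineages = {
    "A": {"niche": "light", "size": 1.0},
    "B": {"niche": "light", "size": 1.0},
    "C": {"niche": "heat", "size": 1.0},
    "D": {"niche": "chemical", "size": 1.0},
}

GROWTH_PER_UNIT_ENERGY = 0.01
DECAY = 0.05  # maintenance cost: lineages shrink unless they keep capturing energy

for step in range(200):
    # Total size competing in each niche this step.
    niche_totals = {n: 0.0 for n in niche_energy}
    for lin in lineages.values():
        niche_totals[lin["niche"]] += lin["size"]
    # Energy captured by each lineage drives its growth.
    for lin in lineages.values():
        share = lin["size"] / niche_totals[lin["niche"]]
        captured = share * niche_energy[lin["niche"]]
        lin["size"] += GROWTH_PER_UNIT_ENERGY * captured - DECAY * lin["size"]

# Several lineages persist, each sized according to the energy it can command.
for name, lin in lineages.items():
    print(name, lin["niche"], round(lin["size"], 2))
```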

I’m exercising some latitude in interpreting his remarks here, which mostly just offer a common characterization of evolution and capitalism in terms of thermodynamic and machine-learning concepts. But the point is that it’s a big picture, different from the synthesis we’re familiar with here (e.g. a multiverse dominated by simulation and timeless trade among a population of autarkic superintelligences), with a significant intellectual ancestry to back it up (especially complex systems theory); and something like it potentially provides a rationale for the seemingly unsafe strategies of a Zuckerberg or a Musk when it comes to dealing with superintelligence.

(A historical digression here: Verdon’s company name, Extropic, of course brings to mind the Extropians, the 1990s Internet transhumanists among whom Eliezer first appeared. One of the differences between Extropian transhumanism and the transhumanism of Less Wrong rationalism is that the Extropians were far more in sync with the idea that the struggle to survive never goes away and that pluralism based in decentralized freedom is the way to go even for transhuman beings, rather than the idea that everything hinges on identifying true human values and extrapolating them faithfully.

To this I would add that in the 1980s, Bruce Sterling’s SF novel Schismatrix featured a “Posthumanist” movement in a solar-system civilization of competing techno-cartels, whose political rhetoric derives from nonequilibrium thermodynamics. I have to think that someone among the founders of e/acc was influenced by that, even if they went on to combine it with a pro-capitalist poetics not employed by Sterling.)

I’ve gone on at such length about the alternative model of the era of superintelligence that I have supposedly found in e/acc precisely because it isn’t spelt out anywhere that I can find. e/acc is variously accused of being in denial about superintelligence, or of hiding its indifference to the future of mere humanity for the sake of public relations, and I think there’s something to that. But if we’re looking for a serious rival to the “singleton” conception of what a world with superintelligence looks like, the unendingly pluralistic evolution of the e/acc universe is such an alternative, and I think it will come up in some form if the tech tycoons of Trump 2.0 are challenged on the topic of superintelligence.