Just skimmed the pdf. This is my first exposure to Aschenbrenner beyond “fired by OpenAI”. I haven’t listened to his interview with Dwarkesh yet.
For some reason, the pdf reminds me a lot of Drexler’s Engines of Creation. Of course, that was a book which argued that nanotechnology would transform everything, but which posed great perils, and shared a few ideas on how to counter those perils. Along the way it mentions that nanotechnology will lead to a great concentration of power, dubbed “the leading force”, and says that the “cooperating democracies” of the world are the leading force for now, and can stay that way.
Aschenbrenner’s opus is like an accelerated version of this that focuses on AI. For Drexler, nanotechnology was still decades away. For Aschenbrenner, superintelligence is coming later this decade, and the 2030s will see a speedrun through the possibilities of science and technology, culminating in a year of chaos in which the political character of the world will be decided (since superintelligent AI will be harnessed by some political system or other). Aschenbrenner’s take is that liberal democracy needs to prevail, that it can do so if the US maintains its existing lead in AI, but that to do so, the US has to treat frontier algorithms as the top national security issue, and nationalize AI in some way or other.
At first read, Aschenbrenner’s reasoning seems logical to me in many areas. For example, I think AI nationalization is the logical thing for the US to do, given the context he describes; though I wonder if the US has enough institutional coherence to do something so forceful. (Perhaps it is more consistent with Trump’s autocratic style, than with Biden’s spokesperson-for-the-system demeanour.) Though the Harris brothers recently assured Joe Rogan that, as smart as Silicon Valley’s best are, there are people like that scattered throughout the US government too; the hypercompetent people that @trevor has talked about.
When Aschenbrenner said that by the end of the 2020s, there will be massive growth in electrical production (for the sake of training AIs), that made me a bit skeptical. I believe superintelligence can probably design and mass-produce transformative material technologies quickly, but I’m not sure I believe in the human economy’s ability to do so. However, I haven’t checked the numbers, this is just a feeling (a “vibe”?).
I become more skeptical when Aschenbrenner says there will be millions of superintelligent agents in the world—and the political future will still be at stake. I think, once you reach that situation, humanity exists at their mercy, not vice versa… Aschenbrenner also says he’s optimistic about the solvability of superalignment; which I guess makes Anthropic important, since they’re now the only leading AI company that’s working on it.
As a person, Aschenbrenner seems quite impressive (what is he, 25?). Apparently there is, or was, a post on Threads beginning like this:
I feel slightly bad for AI’s latest main character, Leopold Aschenbrenner. He seems like a bright young man, which is awesome! But there are some things you can only learn with age. There are no shortcuts
I can’t find the full text or original post (but I am not on Threads). It’s probably just someone being a generic killjoy—“things don’t turn out how you expect, kid”—but I would be interested to know the full comment, just in case it contains something important.
He graduated Columbia in 2021 at 19, so I think more like 22.