I dislike Yudkowsky’s definition/operationalisation of “superintelligence”.
“A single AI system that is efficient wrt humanity on all cognitive tasks” seems dubious with near term compute.
A single system that’s efficient wrt human civilisation on all cognitive tasks is IMO flat out infeasible[1].
I think that’s just not how optimisation works!
No free lunch theorems hold in their strongest forms in maximally entropic universes; our universe isn't maximally entropic, but it isn't minimally entropic either!
You can’t get maximal free lunch here.
You can’t be optimal across all domains. You must cognitively specialise.
I do not believe there’s a single optimisation algorithm that is optimal on all cognitive tasks/domains of economic importance in our universe.
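(For reference, the result being gestured at here is the Wolpert–Macready no-free-lunch theorem for optimisation: averaged uniformly over all objective functions $f$ on a finite search space, any two black-box optimisers $a_1$ and $a_2$ see the same distribution of outcomes,

$$\sum_{f} P\!\left(d_m^{y} \mid f, m, a_1\right) = \sum_{f} P\!\left(d_m^{y} \mid f, m, a_2\right),$$

where $d_m^{y}$ is the sequence of objective values observed after $m$ evaluations. The uniform average over all $f$ is the "max entropic universe" case; a structured universe breaks that uniformity, which is what leaves room for some, but not maximal, free lunch.)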
Superintelligences are Pareto efficient wrt lesser intelligences, but they are not in general efficient on all domains wrt lesser intelligences. Superintelligences are constrained by the Pareto frontier across all domains they operate in.
I think optimality across all domains is impossible as a matter of computer science and the physics/information theory of our universe.
I think efficiency in all domains wrt human civilisation is infeasible as a matter of economic constraints and the theoretical limits on attainable optimality.
To put this in a succinct form, I think a superintelligence can’t beat SOTA dedicated chess AIs running on a comparable amount of compute.
I’d expect the superintelligence to have a lower Elo rating.
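(For scale, under the standard Elo model a rating gap of $D$ points gives the weaker player an expected score of $1/(1 + 10^{D/400})$ per game, so e.g. a 200-point deficit means scoring only about 24% against the dedicated engine; the 200-point figure is purely illustrative.)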
Thus, I expect parts of human civilisation/economic infrastructure to retain comparative advantage (and plausibly even absolute advantage) on some tasks of economic importance wrt any strongly superhuman general intelligences due to the constraints of Pareto optimality.
I think there would be gains from trade between civilisation and agentic superintelligences. I find the assumption that a superintelligence would be as far above civilisation as civilisation is above an ant hill nonsensical.
The human brain is a universal learning machine in important respects, and human civilisation is a universal civilisation in a very strong sense.
That is, I don’t think human civilisation feasibly develops a single system that’s efficient wrt (the rest of) human civilisation on all cognitive tasks of economic importance.
there’s a lot to unpack here. i feel like i disagree with a lot of this post, but it depends on the definitions of terms, which in turn depend on what those questions’ answers are supposed to be used for.
what do you mean by “optimality across all domains” and why do you care about that?
what do you mean by “efficiency in all domains wrt human civilization” and why do you care about that?
there also are statements that i easily, straight-up disagree with. for example, the claim that a superintelligence can’t beat SOTA dedicated chess AIs running on a comparable amount of compute:
that feels easily wrong. 2026 chess SOTA probly beats 2023 chess SOTA. so why can’t a superintelligent AI just invent in 2023 what it would’ve taken us 3 years to invent, get to 2026 chess SOTA, and use that to beat our SOTA? it’s not like we’re anywhere near optimal or even remotely good at designing software, let alone AI. sure, this superintelligence spends some compute coming up with its own better-than-SOTA chess-specialized algorithm, but that investment could be quickly reimbursed. whether it can be reimbursed within a single game of chess comes down to various constant factors.
a superintelligence beats existing specialized systems because it can turn itself into what they do, but also into something better than what they do, because it also has the capability “design better AI”. this feels sufficient for a superintelligence to beat any specialized system that doesn’t have a general-improvement part (and if it does have one, it probly fooms to superintelligence pretty easily itself). but note that this might not even be necessary for a superintelligence to beat existing specialized systems: it could improve itself in a very general way that lets it be better, on arrival, than most existing specialized systems.
this is all because existing specialized systems are very far from optimal. that’s the whole point of 2026 chess SOTA beating 2023 chess SOTA — 2023 chess SOTA isn’t optimal, so there’s room to find better, and superintelligence can simply make itself be a finder-of-better-things.
okay, even if this were true (even if parts of civilisation retain comparative, or even absolute, advantage on some economically important tasks), it doesn’t particularly matter, right? like, if AI is worse than us at a bunch of tasks, but it’s good enough to take over enough of the internet to achieve decisive strategic advantage and then kill us, then that doesn’t really matter a lot.
so sure, the AI never learned to drive better than our truckers and our truckers never technically lost their jobs to competition, but also everyone everywhere is dead forever.
but i guess this relies on various arguments about the brittleness of civilization to unaligned AI.
why is the ant hill comparison nonsensical? even if both of your claims are true, that general optimality is impossible and general efficiency is infeasible, this does not stop an AI from specializing at taking over the world, which is much easier than outcompeting every industry (you never have to beat truckers at driving to take over the world!). and then, it doesn’t take much inside view to see how an AI could actually do this without a huge amount of general intelligence; yudkowsky’s usual scheme for AI achieving DSA and us all falling dead within the same second, as explained in the podcast he was recently on, is one possible inside-view way for this to happen.
we’re “universal”, maybe, but we’re the very first thing that got to taking over the world. there’s no reason to think that the very first thing to take over the world is also the thing that’s the best at taking over the world; and surprise, here’s one that can probly beat us at that.
and that’s all excluding dumb ways to die, such as someone at a protein factory just plugging an AI into the protein design machine to see what funny designs it’ll come up with and accidentally killing everyone, with neither user nor AI having particularly intended to do this (the AI is just outputting “interesting” proteins).
I think that DG is making a more nitpicky point and just claiming that that specific definition is not feasible, rather than using this as a claim that foom is not feasible, at least in this post. He also claims that elsewhere, but has a different argument about humans being able to make narrow AI for things like strategy (which I think is also wrong). At least that’s what I’ve understood from our previous discussions.
yeah, totally, i’m also just using that post as a jump-off point for a more in-depth long-form discussion about dragon god’s beliefs.
If the thing you say is true, superintelligence will just build specialized narrow superintelligences for particular tasks, just like how we build machines. It doesn’t leave us much chance for trade.
This also presupposes that:
1. The system has an absolute advantage wrt civilisation at the task of developing specialised systems for any task
2. The system also has a comparative advantage
I think #1 is dubious for attainable strongly superhuman general intelligences, and #2 is likely nonsense.
I think #2 only sounds not nonsense if you ignore all economic constraints.
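(A toy Ricardian sketch of why #2 runs into trouble, with purely made-up numbers and task names: even if the AI has an absolute advantage at every task, opportunity costs force the comparative advantages to be split, which is where the gains from trade mentioned above would come from.)

```python
# Toy Ricardian comparative-advantage example; all numbers and task names
# are made up purely for illustration.
productivity = {
    "AI":       {"chip_design": 100.0, "trucking": 50.0},  # absolute advantage at both
    "humanity": {"chip_design": 1.0,   "trucking": 10.0},
}

def opportunity_cost(agent: str, task: str, other_task: str) -> float:
    """Units of `other_task` output forgone per unit of `task` output."""
    p = productivity[agent]
    return p[other_task] / p[task]

for agent in productivity:
    for task, other in [("chip_design", "trucking"), ("trucking", "chip_design")]:
        print(f"{agent}: 1 unit of {task} costs "
              f"{opportunity_cost(agent, task, other):.2f} units of {other}")

# The AI gives up only 0.5 trucking per unit of chip_design (humanity gives up 10),
# while humanity gives up only 0.1 chip_design per unit of trucking (the AI gives up 2).
# Despite the AI's absolute advantage everywhere, the comparative advantage in trucking
# sits with humanity, so specialisation plus trade beats either side doing everything alone.
```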
I think the problem is defining superintelligence as a thing that’s “efficient wrt human civilisation on all cognitive tasks of economic importance”, when my objection is: “that thing you have defined may not be something that is actually physically possible. Attainable strongly superhuman general intelligences are not the thing that you have defined”.
Like you can round off my position to “certain definitions of superintelligence just seem prima facie infeasible/unattainable to me” without losing much nuance.
I actually can’t imagine any subtask of “turning the world into paperclips” where humanity can have any comparative advantage, can you give an example?