I am confused by the existence of this discourse. Do its participants not believe strong superintelligence is possible?
Can you elaborate, I’m not sure what you are asking. I believe strong superintelligence is possible.
Why would strong superintelligence coexist with an economy? Wouldn’t an aligned (or unaligned) superintelligence antiquate it all?
Note, firstly, that money will continue being a thing, at least unless we have one single AI system doing all economic planning. Prices are largely about communicating information. If there are many actors and they trade with each other, the strong assumption should be that there are prices (even if humans do not see them or interact with them). Remember too that however sharp the singularity, abundance will still be finite, and must therefore be allocated.
Though yes, I agree that a superintelligent singleton controlling a command economy means this breaks down.
However it seems far from clear we will end up exactly there. The finiteness of the future lightcone and the resulting necessity of allocating “scarce” resources, the usefulness of a single medium of exchange (which you can see as motivated by coherence theorems if you want), and trade between different entities all seem like very general concepts. So even in futures that are otherwise very alien, but just not in the exact “singleton-run command economy” direction, I expect a high chance that those concepts matter.
I am still confused.
Maybe the crux is that you are not expecting superintelligence?[1] This quote seems to indicate that: “However it seems far from clear we will end up exactly there”. Also, your post writes about “labor-replacing AGI” but writes as if the world it might cause near-term lasts eternally (“anyone of importance in the future will be important because of something they or someone they were close with did in the pre-AGI era (‘oh, my uncle was technical staff at OpenAI’). The children of the future will live their lives in the shadow of their parents”)
If not, my response:
just not in the exact “singleton-run command economy” direction
I don’t see why strongly-superintelligent optimization would benefit from an economy of any kind.
Given superintelligence, I don’t see how there would still be different entities doing actual (as opposed to just-for-fun / fantasy-like) dynamic (as opposed to acausal) trade with each other, because the first superintelligent agent would have control over the whole lightcone.
Even if trade currently captures information (including about the preferences of those engaged in it), it is still unlikely to be the best way to gain this information if you are a superintelligence.[2]
[1] (Regardless of whether the first superintelligence is an agent, a superintelligent agent is probably created soon after)
[2] I could list better ways of gaining this information given superintelligence, if this claim is not obvious.
If takeoff is more continuous than hard, why is it so obvious that there exists exactly one superintelligence rather than multiple? Or are you assuming hard takeoff?
Also, your post writes about “labor-replacing AGI” but writes as if the world it might cause near-term lasts eternally
If things go well, human individuals continue existing (and humans continue making new humans, whether digitally or not). Also, it seems more likely than not that fairly strong property rights continue (if property rights aren’t strong, and humans aren’t augmented to be competitive with the superintelligences, then prospects for human survival seem weak since humans’ main advantage is that they start out owning a lot of the stuff—and yes, that they can shape the values of the AGI, but I tentatively think CEV-type solutions are neither plausible nor necessarily desirable). The simplest scenario is that there is continuity between current and post-singularity property ownership (especially if takeoff is slow and there isn’t a clear “reset” point). The AI stuff might get crazy and the world might change a lot as a result, but these guesses, if correct, seem to pin down a lot of what the human situation looks like.
I don’t think so, but I’m not sure exactly what this means. This post says slow takeoff means ‘smooth/gradual’ and my view is compatible with that—smooth/gradual, but at some point the singularity is reached (a superintelligent optimization process starts).
why is it so obvious that there exists exactly one superintelligence rather than multiple?
Because it would require an odd set of events that cause two superintelligent agents to be created, if not at the same time, then within the time it would take one to start affecting matter on the other side of the planet relative to where it is[1]. Even if that happened, I don’t think it would change the outcome (e.g. lead to an economy). And it’s still far from a world with a lot of superintelligences. And even in a world where a lot of superintelligences are created at the same time, I’d expect them to do something like a value handshake, after which the outcome looks the same again.
(I thought this was a commonly accepted view here)
Reading your next paragraph, I still think we must have fundamentally different ideas about what superintelligence (or “the most capable possible agent, modulo unbounded quantitative aspects like memory size”) would be. (You seem to expect it to be incapable of finding routes to its goals which do not require (negotiating with) humans.)
(note: even in a world where {learning / task-annealing / selecting a bag of heuristics} is the best (in a sense only) method of problem solving, which might be an implicit premise of expectations of this kind, there will still eventually be some Theory of Learning which enables the creation of ideal learning-based agents, which then take the role of superintelligence in the above story)
[1] which is still pretty short, thanks to computer communication.
(and that’s only if being created slightly earlier doesn’t afford some decisive physical advantage over the other, which depends on physics)
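(For rough scale, and assuming the relevant signals travel at roughly light speed over about half of Earth’s circumference, ~20,000 km, the delay in question is
$t \approx \frac{2\times10^{4}\ \text{km}}{3\times10^{5}\ \text{km/s}} \approx 0.07\ \text{s},$
i.e. a few tens of milliseconds.)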
I think your expectations are closer to mine in some ways, quila. But I do doubt that the transition will be as fast and smooth as you predict. The AIs we’re seeing now have very spiky capability profiles, and I expect early AGI to be similar. It seems likely to me that there will be a period which is perhaps short in wall-clock time but still significant in downstream causal effects, where there are multiple versions of AGIs interacting with humans in shaping the ASI(s) that later emerge.
I think a single super-powerful ASI is one way things could go, but I also think that there’s reason to expect a more multi-polar community of AIs, perhaps blending into each other around the edges of their collaboration, with merges made of distilled-down versions of their larger selves. I think the cohesion of a typical human mind is more due to the limitations of biology and the shaping forces of biological evolution than to an inherent attractor-state in mindspace.
Do you want to look for cruxes? I can’t tell what your cruxy underlying beliefs are from your comment.
I think the cohesion of a typical human mind is more due to the limitations of biology and the shaping forces of biological evolution than to an inherent attractor-state in mindspace.
I don’t think whether there is an attractor[1] towards cohesiveness is a crux for me (although I’d be interested in reading your thoughts on that anyways), at least because it looks like humans will try to create an optimal agent, so it doesn’t need to have a common attractor or be found through one[2], it just needs to be possible at all.
But I do doubt that the transition will be as fast and smooth as you predict
Note: I wrote that my view is compatible with ‘smooth takeoff’, when asked if I was ‘assuming hard takeoff’. I don’t know what ‘takeoff’ looks like, especially prior to recursive AI research.
there will be a period which is perhaps short in wall-clock time but still significant in downstream causal effects, where there are multiple versions of AGIs interacting with humans in shaping the ASI(s) that later emerge.
Sure (if ‘shaping’ is merely ‘having a causal effect on’, not necessarily in the hoped-for direction).
a more multi-polar community of AIs
Sure, that could happen before superintelligence, but why do you then frame it as an alternative to superintelligence?[3]
Feel free to ask me probing questions as well, and no pressure to engage.
[1] (adding a note just in case it’s relevant: attractors are not in mindspace/programspace itself, but in the conjunction with the specific process selecting the mind/program)
[2] as opposed to through understanding agency/problem-solving(-learning) more fundamentally/mathematically
(Edit to add: I saw this other comment by you. I agree that maybe there could be good governance made of humans + AIs and if that happened, then that could prevent anyone from creating a super-agent, although it would still end with (in this case aligned) superintelligence in my view.
I can also imagine, but doubt it’s what you mean, runaway processes which are composed of ‘many AIs’ but which do not converge to superintelligence, because that sounds intuitively-mathematically possible (i.e., where none of the AIs are exactly subject to instrumental convergence, nor have the impulse to do things which create superintelligence, but the process nonetheless spreads and consumes and creates more ~‘myopically’ powerful AIs (until plateauing beyond the point of human/altruist disempowerment)))
I think there are a lot of places where we agree. In this comment I was trying to say that I feel doubtful about the idea of a superintelligence arising once, and then no other superintelligences arise because the first one had time to fully seize control of the world. I think it’s also possible that there is time for more than one super-human intelligence to arise and then compete with each other.
I think the offense-dominant nature of our current technological milieu means that humanity is almost certainly toast under the multipolar superintelligence scenario unless the controllers (likely the ASIs themselves) are in a stable violence-preventing governance framework (which could be simply a pact between two powerful ASIs).
Responses:
Sure (if ‘shaping’ is merely ‘having a causal effect on’, not necessarily in the hoped-for direction).
Yes, that’s what I meant. Control seems like not-at-all a default scenario to me. More like the accelerating self-improving AI process is a boulder tumbling down a hill, and humanity is a stone in its path that may alter its trajectory (while likely being destroyed in the process).
a more multi-polar community of AIs
Sure, that could happen before superintelligence, but why do you then frame it as an alternative to superintelligence?[3]
More that I am trying to suggest that such a multi-polar community of sub-super-intelligent AIs makes a multipolar ASI scenario seem more likely to me. Not as an alternative to superintelligence.
I’m pretty sure we’re on a fast-track to either superintelligence-within-ten-years or civilizational collapse (e.g. large scale nuclear war). I doubt very much that any governance effort will manage to delay superintelligence for more than 10 years from now.
I think our best hope is to go all-in on alignment and governance efforts designed to shape the near-term future of AI progress, not on attempts to pause/delay. I think that algorithmic advance is the most dangerous piece of the puzzle, and wouldn’t be much hindered by restrictions on large training runs (which is what people often mean when talking of delay).
But, if we’re skillful and lucky, we might manage to get to controlled-AGI, and have some sort of AGI-powered world government arise which was able to squash self-improving AI competitors before getting overrun. Then at that point, we could delay, and focus on more robust alignment (including value-alignment rather than just intent-alignment) and on human augmentation / digital people.
I talk more about my thoughts on this in my post here: https://www.lesswrong.com/posts/NRZfxAJztvx2ES5LG/a-path-to-human-autonomy
My response, before having read the linked post:
I was trying to say that I feel doubtful about the idea of a superintelligence arising once [...] I think it’s also possible that there is time for more than one super-human intelligence to arise and then compete with each other.
Okay. I am not seeing why you are doubtful. (I agree 2+ arising near enough in time is merely possible, but it seems like you think it’s more than merely possible, e.g. 5%+ likely? That’s what I’m reading into “doubtful”)
unless the controllers (likely the ASIs themselves) are in a stable violence-preventing governance framework (which could be simply a pact between two powerful ASIs).
Why would the pact protect beings other than the two ASIs? (If one wouldn’t have an incentive to protect, why would two?) (Edit: Or, based on the term “governance framework”, do you believe the human+AGI government could actually control ASIs?)
More that I am trying to suggest that such a multi-polar community of sub-super-intelligent AIs makes a multipolar ASI scenario seem more likely to me. Not as an alternative to superintelligence.
Thanks for clarifying. It’s not intuitive to me why that would make it more likely, and I can’t find anything else in this comment about that.
I think our best hope is to go all-in on alignment and governance efforts designed to shape the near-term future of AI progress [...] if we’re skillful and lucky, we might manage to get to controlled-AGI, and have some sort of AGI-powered world government arise which was able to squash self-improving AI competitors before getting overrun
I see. That does help me understand the motive for ‘control’ research more.
To a first approximation, yes, I believe it would antiquate it all.
Okay, thanks for clarifying. I may have misunderstood your comment. I’m still confused by the existence of the original post with this many upvotes.