When people such as myself say “money won’t matter post-AGI” the claim is NOT that the economy post-AGI won’t involve money (though that might be true) but rather that the strategy of saving money in order to spend it after AGI is a bad strategy. Here are some reasons:
The post-AGI economy might not involve money; it might be more of a command economy.
Even if it involves money, the relationship between how much money someone has before and how much money they have after might not be anywhere close to 1:1. For example:
Maybe the humans will lose control of the AGIs
Maybe the humans who control the AGIs will put values into the AGIs, such that the resulting world redistributes the money, so to speak. E.g. maybe they’ll tax and redistribute to create a more equal society—OR (and you talk about this, but don’t go far enough!) maybe they’ll make a less equal society, one in which ‘how much money you saved’ doesn’t translate into how much money you have in the new world, and instead e.g. being in the good graces of the leadership of the AGI project, as judged by their omnipresent AGI servants that infuse the economy and talk to everyone, is what matters.
Maybe there’ll be a war or something, a final tussle over AGI (and possibly involving AGI) between the US and China, for example. Or between terrorists with AI-produced bioweapons and everyone else. Maybe this war will result in mass casualties and you might be one of them.
Even if saving money through AGI converts 1:1 into money after the singularity, it will probably be worth less in utility to you:
You’ll probably be able to buy planets post-AGI for the price of houses today. More generally your selfish and/or local and/or personal preferences will be fairly easily satisfiable even with small amounts of money, or to put it in other words, there are massive diminishing returns.
For your altruistic or political preferences—e.g. your preferences about how society should be structured, or about what should be done with all the galaxies—then your money post-AGI will be valuable, but it’ll be a drop in the bucket compared to all the money of other people and institutions who also saved their money through AGI (i.e. most of the planet). By contrast, very few people are spending money to influence AGI development right now. If you want future beings to have certain inalienable rights, or if you want the galaxies to be used in such-and-such a way, you can lobby AGI companies right now to change their spec/constitution/RLHF, and to make commitments about what values they’ll instill, etc. More generally you can compete for influence right now. And the amount of money in the arena competing with you is… billions? Whereas the amount of money that is being saved for the post-AGI future is what, a hundred trillion? (Because it’s all the rest of the money there is basically)
I agree (1) and (2) are possibilities. However, from a personal planning pov, you should focus on preparing for scenarios (i) that might last a long time and (ii) where you can affect what happens, since that’s where the stakes are.
Scenarios where we all die soon can mostly be ignored, unless you think they make up most of the probability. (Edit: to be clear, it does reduce the value of saving vs. spending; I just don’t think it’s a big effect unless the probabilities are high.)
I think (3) is the key way to push back.
I feel unsure that all my preferences are either (i) local and easily satisfied or (ii) impartial & altruistic. You only need to have one type of preference with, say, log returns to money that can be better satisfied post-AGI to make capital post-AGI valuable to you (emulations, maybe).
But let’s focus on the altruistic case – I’m very interested in the question of how valuable capital will be altruistically post-AGI.
I think your argument about relative neglectedness makes sense, but is maybe too strong.
There’s about $500 trillion of world wealth, so if you have $1m now, that’s 2e-9 of world wealth. With good investing through the transition, it seems like you can increase your share. Then set that against the chance of confiscation etc., and plausibly you end up with a similar share afterwards.
You say you’d be competing with the entire rest of the pot post-transition, but that seems too negative. Only <3% of income today is used on broadly altruistic stuff, and the amount focused on impartial longtermist values is minuscule (which is why AI safety is neglected in the first place). It seems likely it would still be a minority in the future.
People with an impartial perspective might be able to make good trades with the majority who are locally focused (give up earth for the commons etc.). People with low discount rates should also be able to increase their share over time.
So if you have 2e-9 of future world wealth, it seems like you could get a significantly larger share of the influence (>10x) from the perspective of your values.
Now you need to compare that to $1m extra donated to AI safety in the short term. If you think that would reduce x-risk by less than 1e-8, then saving to give could be more valuable.
Suppose about $10bn will be donated to AI safety before the lock-in moment. Now consider adding a marginal $10bn. Maybe that decreases x-risk by another ~1%. Then that means $1m decreases it by about 1e-6. So with these numbers, I agree donating now is ~100x better.
However, I could imagine people with other reasonable inputs concluding the opposite. It’s also not obvious to me that donating now dominates so much that I’d want to allocate 0% to the other scenario.
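To make this back-of-envelope comparison easy to rerun with other inputs, here is a minimal Python sketch. Every number in it is just an illustrative assumption taken from the comment above (≈$500tn world wealth, a >10x influence multiplier for impartial actors, ≈$10bn of pre-lock-in safety funding, ~1% marginal x-risk reduction per extra $10bn), not an independent estimate. With a strict 10x multiplier the ratio comes out around 50x; against the looser ~1e-8 benchmark used above it is ~100x.

```python
# Back-of-envelope sketch: save $1m through the transition vs. donate $1m to AI safety now.
# All figures are the illustrative assumptions from the comment above, not fresh estimates.

world_wealth = 500e12        # ~$500 trillion of world wealth (assumed)
savings = 1e6                # $1m saved through the transition
influence_multiplier = 10    # impartial actors getting >10x their wealth share of influence (assumed)

# Value of saving: your share of post-transition influence, from your values' perspective
share_of_influence = (savings / world_wealth) * influence_multiplier    # ~2e-8

# Value of donating now: marginal x-risk reduction per dollar, scaled to $1m
baseline_safety_spend = 10e9     # ~$10bn donated to AI safety before lock-in (assumed)
marginal_risk_reduction = 0.01   # an extra $10bn buys ~1% less x-risk (assumed)
risk_reduction_per_dollar = marginal_risk_reduction / baseline_safety_spend
risk_reduction_from_1m = risk_reduction_per_dollar * savings             # ~1e-6

print(f"share of post-AGI influence from saving $1m: {share_of_influence:.1e}")
print(f"x-risk reduction from donating $1m now:      {risk_reduction_from_1m:.1e}")
print(f"ratio (donate now / save): {risk_reduction_from_1m / share_of_influence:.0f}x")
```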
Scenarios where we all die soon can mostly be ignored, unless you think they make up most of the probability.
I would say: unless you can change the probability. These can still be significant in your decision making, if you can invest time or money or effort to decrease the probability.
I think I agree with all of this.
(Except maybe I’d emphasise the command economy possibility slightly less. And compared to what I understand of your ranking, I’d rank competition between different AGIs/AGI-using factions as a relatively more important factor in determining what happens, and values put into AGIs as a relatively less important factor. I think these are both downstream of you expecting slightly-to-somewhat more singleton-like scenarios than I do?)
EDIT: see here for more detail on my take on Daniel’s takes.
Overall, I’d emphasize as the main point in my post: AI-caused shifts in the incentives/leverage of human v non-human factors of production, and this mattering because the interests of power will become less aligned with humans while simultaneously power becomes more entrenched and effective. I’m not really interested in whether someone should save or not for AGI. I think starting off with “money won’t matter post-AGI” was probably a confusing and misleading move on my part.
OK, cool, thanks for clarifying. Seems we were talking past each other then, if you weren’t trying to defend the strategy of saving money to spend after AGI. Cheers!
I see the command economy point as downstream of a broader trend: as technology accelerates, negative public externalities will increasingly scale and present irreversible threats (x-risks, but also more mundane pollution, errant bio-engineering plague risks etc.). If we condition on our continued existence, there must’ve been some solution to this which would look like either greater government intervention (command economy) or a radical upgrade to the coordination mechanisms in our capitalist system. Relevant to your power entrenchment claim: both of these outcomes involve the curtailment of power exerted by private individuals with large piles of capital.
(Note there are certainly other possible reasons to expect a command economy, and I do not know which reasons were particularly compelling to Daniel)
the strategy of saving money in order to spend it after AGI is a bad strategy.
This seems very reasonable and likely correct (though not obvious) to me. I especially like your point about there being lots of competition in the “save it” strategy because it happens by default. Also note that my post explicitly encourages individuals to do ambitious things pre-AGI, rather than focus on safe capital accumulation.
#1 and #2 are serious concerns, but there’s not really much I can do about them anyways. #3 doesn’t make any sense to me.
You’ll probably be able to buy planets post-AGI for the price of houses today
Right, and that seems like OP’s point? Because I can do this, I shouldn’t spend money on consumption goods today and in fact should gather as much money as I can now? Certainly massive stellar objects post-AGI will be more useful to me than a house is pre-AGI?
As to this:
By contrast, very few people are spending money to influence AGI development right now. If you want future beings to have certain inalienable rights, or if you want the galaxies to be used in such-and-such a way, you can lobby AGI companies right now to change their spec/constitution/RLHF, and to make commitments about what values they’ll instill, etc.
I guess I just don’t really believe I have much control over that at all. Further, I can specifically invest in things likely to be important parts of the AGI production function, like semiconductors, etc.
On the contrary, massive stellar objects post-AGI will be less useful to you than a house is today, as far as your selfish personal preferences are concerned. Consider the difference in your quality of life living in a nice house vs. skimping and saving 50% and living in a cheap apartment so you can save money. Next, consider the difference in your quality of life owning your own planet (replete with superintelligent servants) vs. owning merely half a planet. What can you do with a whole planet that you can’t do with half a planet? Not that much.
Re: 1 and 2: Whether you can do something about them matters but doesn’t undermine my argument. You should still discount the value of your savings by their probability.
However little control you have over influencing AGI development, you’ll have orders of magnitude less control over influencing the cosmos / society / etc. after AGI.
On the contrary, massive stellar objects post-AGI will be less useful to you than a house is today, as far as your selfish personal preferences are concerned. Consider the difference in your quality of life living in a nice house vs. skimping and saving 50% and living in a cheap apartment so you can save money. Next, consider the difference in your quality of life owning your own planet (replete with superintelligent servants) vs. owning merely half a planet. What can you do with a whole planet that you can’t do with half a planet? Not that much.
It matters if it means I can live twice as long, because I can purchase more negentropy with which to maintain whatever lifestyle I have.
Good point. If your utility is linear or close to linear in lifespan even at very large scales, and lifespan is based on how much money you have rather than e.g. a right guaranteed by the government, then a planetworth could be almost twice as valuable as half a planetworth.
(My selfish utility is not close to linear in lifespan at very large scales, I think.)
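As a toy illustration of the disagreement here: under a log (diminishing-returns) utility function, going from half a planet to a whole planet is worth no more than going from half a house to a whole house, whereas under a linear utility function (e.g. if wealth converts roughly 1:1 into extra lifespan) a whole planet really is worth about twice half a planet. The functional forms and scales below are made up purely for illustration.

```python
import math

# Toy comparison of log vs. linear utility of resources (arbitrary units).
# Log utility: doubling your resources adds the same utility (log 2) at any scale.
# Linear utility: doubling your resources doubles the gain, so scale matters a lot.

def log_utility(resources):
    return math.log(resources)

def linear_utility(resources):
    return resources

house, planet = 1.0, 1e20   # made-up relative scale, purely for illustration

for u in (log_utility, linear_utility):
    gain_house = u(house) - u(house / 2)
    gain_planet = u(planet) - u(planet / 2)
    print(f"{u.__name__}: house gain = {gain_house:.3g}, planet gain = {gain_planet:.3g}")
```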
You’ll probably be able to buy planets post-AGI for the price of houses today
I am confused by the existence of this discourse. Do its participants not believe strong superintelligence is possible?
(edit: I misinterpreted Daniel’s comment, I thought this quote indicated they thought it was non-trivially likely, instead of just being reasoning through an ‘even if’ scenario / scenario relevant in OP’s model)
Can you elaborate? I’m not sure what you are asking. I believe strong superintelligence is possible.
Why would strong superintelligence coexist with an economy? Wouldn’t an aligned (or unaligned) superintelligence antiquate it all?
To a first approximation, yes, I believe it would antiquate it all.
Okay, thanks for clarifying. I may have misunderstood your comment. I’m still confused by the existence of the original post with this many upvotes.
Note, firstly, that money will continue being a thing, at least unless we have one single AI system doing all economic planning. Prices are largely about communicating information. If there are many actors and they trade with each other, the strong assumption should be that there are prices (even if humans do not see them or interact with them). Remember too that however sharp the singularity, abundance will still be finite, and must therefore be allocated.
Though yes, I agree that a superintelligent singleton controlling a command economy means this breaks down.
However it seems far from clear we will end up exactly there. The finiteness of the future lightcone and the resulting necessity of allocating “scarce” resources, the usefulness of a single medium of exchange (which you can see as motivated by coherence theorems if you want), and trade between different entities all seem like very general concepts. So even in futures that are otherwise very alien, but just not in the exact “singleton-run command economy” direction, I expect a high chance that those concepts matter.
I am still confused.
Maybe the crux is that you are not expecting superintelligence?[1] This quote seems to indicate that: “However it seems far from clear we will end up exactly there”. Also, your post writes about “labor-replacing AGI” but writes as if the world it might cause near-term lasts eternally (“anyone of importance in the future will be important because of something they or someone they were close with did in the pre-AGI era (‘oh, my uncle was technical staff at OpenAI’). The children of the future will live their lives in the shadow of their parents”)
If not, my response:
just not in the exact “singleton-run command economy” direction
I don’t see why strongly-superintelligent optimization would benefit from an economy of any kind.
Given superintelligence, I don’t see how there would still be different entities doing actual (as opposed to just-for-fun / fantasy-like) dynamic (as opposed to acausal) trade with each other, because the first superintelligent agent would have control over the whole lightcone.
If trade currently captures information (including about the preferences of those engaged in it), it is regardless unlikely to be the best way to gain this information, if you are a superintelligence.[2]
[1] (Regardless of whether the first superintelligence is an agent, a superintelligent agent is probably created soon after)
[2] I could list better ways of gaining this information given superintelligence, if this claim is not obvious.
If takeoff is more continuous than hard, why is it so obvious that there exists exactly one superintelligence rather than multiple? Or are you assuming hard takeoff?
Also, your post writes about “labor-replacing AGI” but writes as if the world it might cause near-term lasts eternally
If things go well, human individuals continue existing (and humans continue making new humans, whether digitally or not). Also, it seems more likely than not that fairly strong property rights continue (if property rights aren’t strong, and humans aren’t augmented to be competitive with the superintelligences, then prospects for human survival seem weak since humans’ main advantage is that they start out owning a lot of the stuff—and yes, that they can shape the values of the AGI, but I tentatively think CEV-type solutions are neither plausible nor necessarily desirable). The simplest scenario is that there is continuity between current and post-singularity property ownership (especially if takeoff is slow and there isn’t a clear “reset” point). The AI stuff might get crazy and the world might change a lot as a result, but these guesses, if correct, seem to pin down a lot of what the human situation looks like.
I don’t think so, but I’m not sure exactly what this means. This post says slow takeoff means ‘smooth/gradual’ and my view is compatible with that—smooth/gradual, but at some point the singularity point is reached (a superintelligent optimization process starts).
why is it so obvious that there exists exactly one superintelligence rather than multiple?
Because it would require an odd set of events that cause two superintelligent agents to be created... if not at the same time, then within the time it would take one to start affecting matter on the other side of the planet relative to where it is[1]. Even if that happened, I don’t think it would change the outcome (e.g. lead to an economy). And it’s still far from a world with a lot of superintelligences. And even in a world where a lot of superintelligences are created at the same time, I’d expect them to do something like a value handshake, after which the outcome looks the same again.
(I thought this was a commonly accepted view here)
Reading your next paragraph, I still think we must have fundamentally different ideas about what superintelligence (or “the most capable possible agent, modulo unbounded quantitative aspects like memory size”) would be. (You seem to expect it to be not capable of finding routes to its goals which do not require (negotiating with) humans)
(note: even in a world where {learning / task-annealing / selecting a bag of heuristics} is the best (in a sense only) method of problem solving, which might be an implicit premise of expectations of this kind, there will still eventually be some Theory of Learning which enables the creation of ideal learning-based agents, which then take the role of superintelligence in the above story)
[1] which is still pretty short, thanks to computer communication. (and that’s only if being created slightly earlier doesn’t afford some decisive physical advantage over the other, which depends on physics)
I think your expectations are closer to mine in some ways, quila. But I do doubt that the transition will be as fast and smooth as you predict. The AIs we’re seeing now have very spiky capability profiles, and I expect early AGI to be similar. It seems likely to me that there will be a period which is perhaps short in wall-clock time but still significant in downstream causal effects, where there are multiple versions of AGIs interacting with humans in shaping the ASI(s) that later emerge.
I think a single super-powerful ASI is one way things could go, but I also think that there’s reason to expect a more multi-polar community of AIs, perhaps blending into each other around the edges of their collaboration, merges made of distilled down versions of their larger selves. I think the cohesion of a typical human mind is more due to the limitations of biology and the shaping forces of biological evolution than to an inherent attractor-state in mindspace.
Do you want to look for cruxes? I can’t tell what your cruxy underlying beliefs are from your comment.
I think the cohesion of a typical human mind is more due to the limitations of biology and the shaping forces of biological evolution than to an inherent attractor-state in mindspace.
I don’t think whether there is an attractor[1] towards cohesiveness is a crux for me (although I’d be interested in reading your thoughts on that anyways), at least because it looks like humans will try to create an optimal agent, so it doesn’t need to have a common attractor or be found through one[2], it just needs to be possible at all.
But I do doubt that the transition will be as fast and smooth as you predict
Note: I wrote that my view is compatible with ‘smooth takeoff’, when asked if I was ‘assuming hard takeoff’. I don’t know what ‘takeoff’ looks like, especially prior to recursive AI research.
there will be a period which is perhaps short in wall-clock time but still significant in downstream causal effects, where there are multiple versions of AGIs interacting with humans in shaping the ASI(s) that later emerge.
Sure (if ‘shaping’ is merely ‘having a causal effect on’, not necessarily in the hoped-for direction).
a more multi-polar community of AIs
Sure, that could happen before superintelligence, but why do you then frame it as an alternative to superintelligence?[3]
Feel free to ask me probing questions as well, and no pressure to engage.
[1] (adding a note just in case it’s relevant: attractors are not in mindspace/programspace itself, but in the conjunction with the specific process selecting the mind/program)
[2] as opposed to through understanding agency/problem-solving(-learning) more fundamentally/mathematically
(Edit to add: I saw this other comment by you. I agree that maybe there could be good governance made of humans + AIs and if that happened, then that could prevent anyone from creating a super-agent, although it would still end with (in this case aligned) superintelligence in my view.
I can also imagine, but doubt it’s what you mean, runaway processes which are composed of ‘many AIs’ but which do not converge to superintelligence, because that sounds intuitively-mathematically possible (i.e., where none of the AIs are exactly subject to instrumental convergence, nor have the impulse to do things which create superintelligence, but the process nonetheless spreads and consumes and creates more ~‘myopically’ powerful AIs (until plateauing beyond the point of human/altruist disempowerment)))
I think there are a lot of places where we agree. In this comment I was trying to say that I feel doubtful about the idea of a superintelligence arising once, and then no other superintelligences arise because the first one had time to fully seize control of the world. I think it’s also possible that there is time for more than one super-human intelligence to arise and then compete with each other.
I think the offense-dominant nature of our current technological milieu means that humanity is almost certainly toast under the multipolar superintelligence scenario unless the controllers (likely the ASIs themselves) are in a stable violence-preventing governance framework (which could be simply a pact between two powerful ASIs).
Responses:
Sure (if ‘shaping’ is merely ‘having a causal effect on’, not necessarily in the hoped-for direction).
Yes, that’s what I meant. Control seems like not-at-all a default scenario to me. More like the accelerating self-improving AI process is a boulder tumbling down a hill, and humanity is a stone in its path that may alter its trajectory (while likely being destroyed in the process).
a more multi-polar community of AIs
Sure, that could happen before superintelligence, but why do you then frame it as an alternative to superintelligence?[3]
More that I am trying to suggest that such a multi-polar community of sub-super-intelligent AIs makes a multipolar ASI scenario seem more likely to me. Not as an alternative to superintelligence.
I’m pretty sure we’re on a fast-track to either superintelligence-within-ten-years or civilizational collapse (e.g. large scale nuclear war). I doubt very much that any governance effort will manage to delay superintelligence for more than 10 years from now.
I think our best hope is to go all-in on alignment and governance efforts designed to shape the near-term future of AI progress, not on attempts to pause/delay. I think that algorithmic advance is the most dangerous piece of the puzzle, and wouldn’t be much hindered by restrictions on large training runs (which is what people often mean when talking of delay).
But, if we’re skillful and lucky, we might manage to get to controlled-AGI, and have some sort of AGI-powered world government arise which was able to squash self-improving AI competitors before getting overrun. Then at that point, we could delay, and focus on more robust alignment (including value-alignment rather than just intent-alignment) and on human augmentation / digital people.
I talk more about my thoughts on this in my post here: https://www.lesswrong.com/posts/NRZfxAJztvx2ES5LG/a-path-to-human-autonomy
My response, before having read the linked post:
I was trying to say that I feel doubtful about the idea of a superintelligence arising once [...] I think it’s also possible that there is time for more than one super-human intelligence to arise and then compete with each other.
Okay. I am not seeing why you are doubtful. (I agree 2+ arising near enough in time is merely possible, but it seems like you think it’s much more than merely possible, e.g. 5%+ likely? That’s what I’m reading into “doubtful”)
unless the controllers (likely the ASIs themselves) are in a stable violence-preventing governance framework (which could be simply a pact between two powerful ASIs).
Why would the pact protect beings other than the two ASIs? (If one wouldn’t have an incentive to protect, why would two?) (Edit: Or, based on the term “governance framework”, do you believe the human+AGI government could actually control ASIs?)
More that I am trying to suggest that such a multi-polar community of sub-super-intelligent AIs makes a multipolar ASI scenario seem more likely to me. Not as an alternative to superintelligence.
Thanks for clarifying. It’s not intuitive to me why that would make it more likely, and I can’t find anything else in this comment about that.
I think our best hope is to go all-in on alignment and governance efforts designed to shape the near-term future of AI progress [...] if we’re skillful and lucky, we might manage to get to controlled-AGI, and have some sort of AGI-powered world government arise which was able to squash self-improving AI competitors before getting overrun
I see. That does help me understand the motive for ‘control’ research more.
Even if saving money through AGI converts 1:1 into money after the singularity, it will probably be worth less in utility to you:
You’ll probably be able to buy planets post-AGI for the price of houses today. More generally your selfish and/or local and/or personal preferences will be fairly easily satisfiable even with small amounts of money, or to put it in other words, there are massive diminishing returns.
No one will be buying planets for the novelty or as an exotic vacation destination. The reason you buy a planet is to convert it into computing power, which you then attach to your own mind. If people aren’t explicitly prevented from using planets for that purpose, then planets are going to be in very high demand, and very useful for people on a personal level.
Is your selfish utility linear in computing power? Is the difference between how your life goes with a planet’s worth of compute that much bigger than how it goes with half a planet’s worth of compute? I doubt it.
Also, there are eight billion people now, and many orders of magnitude more planets, not to mention all the stars etc. “You’ll probably be able to buy planets post-AGI for the price of houses today” was probably a massive understatement.