I’ve heard many people say something like “money won’t matter post-AGI”. This has always struck me as odd, and as most likely completely incorrect.
Given our exchange in the comments, perhaps you should clarify that you aren’t trying to argue that saving money to spend after AGI is a good strategy, you agree it’s a bad strategy. Sometimes when people say “money won’t matter post-AGI” they mean “saving money to spend after AGI is a bad strategy”, whereas you are taking it to mean “we’ll all be living in egalitarian utopia after AGI” or something like that.
I already added this to the start of the post:
Edited to add: The main takeaway of this post is meant to be: Labour-replacing AI will shift the relative importance of human v non-human factors of production, which reduces the incentives for society to care about humans while making existing powers more effective and entrenched. Many people are reading this post in a way where either (a) “capital” means just “money” (rather than also including physical capital like factories and data centres), or (b) the main concern is human-human inequality (rather than broader societal concerns about humanity’s collective position, the potential for social change, and human agency).
However:
perhaps you should clarify that you aren’t trying to argue that saving money to spend after AGI is a good strategy, you agree it’s a bad strategy
I think my take is a bit more nuanced:
in my post, I explicitly disagree with focusing purely on getting money now, and especially oppose abandoning more neglected ways of impacting AI development in favour of ways that also optimise for personal capital accumulation (see the start of the takeaways section)
the reason is that I think now is a uniquely “liquid” / high-leverage time to shape the world through hard work, especially because the world might soon get much more locked-in and because current AI makes it easier to do things
(also, I think modern culture is way too risk averse in general, and worry many people will do motivated reasoning and end up thinking they should accept the quant finance / top lab pay package for fancy AI reasons, when their actual reason is that they just want that security and status for prosaic reasons, and the world would benefit most from them actually daring to work on some neglected impactful thing)
however, it’s also true that money is a very fungible resource, and we’re heading into very uncertain times where the value of labour (most people’s current biggest investment) looks likely to plummet
if I had to give advice to people who aren’t working on influencing AI for the better, I’d focus on generically “moving upwind” in terms of fungible resources: connections, money, skills, etc. If I had to pick one to advise a bystander to optimise for, I’d put social connections above money: they’re robust in more scenarios (e.g. in very politicky worlds where money alone doesn’t help), have deep value in any world where humans survive, are even more likely to be a nexus of status competition in post-labour futures, and are more life-affirming and happiness-boosting in the meantime
This is despite agreeing with the takes in your earlier comment. My exact views in more detail (comments/summaries in square brackets):
The post-AGI economy might not involve money; it might be more of a command economy. [yep, this is plausible, but as I write here, I’m guessing my odds on this are lower than yours: I think a command economy with a singleton is plausible but not the median outcome]
Even if it involves money, the relationship between how much money someone has before and how much money they have after might not be anywhere close to 1:1. For example: [loss of control, non-monetary power, destructive war] [yep, the capital strategy is not risk-free, but this only really applies to selfish concerns if there are better ways to prepare for post-AGI; cf. my point about social connections above]
Even if saving money through AGI converts 1:1 into money after the singularity, it will probably be worth less in utility to you
[because even low levels of wealth will max out personal utility post-AGI] [seems likely true, modulo some uncertainty about: (a) utility from positional goods v absolute goods v striving, and (b) whether “everyone gets UBI”-esque stuff is stable/likely, or fails due to despotism / competitive incentives / whatever]
[because for altruistic goals the leverage from influencing AI now is probably greater than leverage of competing against everyone else’s saved capital after AGI] [complicated, but I think this is very correct at least for individuals and most orgs]
Regarding:
you are taking it to mean “we’ll all be living in egalitarian utopia after AGI” or something like that
I think there’s a decent chance we’ll live in a material-quality-of-life-utopia after AGI, assuming “Things Go Fine” (i.e. misalignment / war / going-out-with-a-whimper don’t happen). I think it’s unlikely to be egalitarian in the “everyone has the same opportunities and resources” sense, for the reasons I lay out above. There are lots of valid arguments for why, if Things Go Fine, it will still be much better than today despite that inequality, and the inequality might not practically matter very much because consumption gets maxed out etc. To be clear, I am very against cancelling the transhumanist utopia because some people will be able to buy 30 planets rather than just a continent. But there are some structural things that make me worried about stagnation, culture, and human relevance in such worlds.
In particular, I’d be curious to hear your takes about the section on state incentives after labour-replacing AI, which I don’t think you’ve addressed and which I think is fairly cruxy to why I’m less optimistic than you about things going well for most humans even given massive abundance and tech.
Thanks for the clarification!
I am not sure you are less optimistic than me about things going well for most humans even given massive abundance and tech. We might not disagree. In particular, I think I’m more worried about coups/power-grabs than you are; you say both considerations point in different directions, whereas I think they point in the same (bad) direction.
I think that if things go well for most humans, it’ll either be because we manage to end this crazy death race to AGI and get some serious regulation etc., or because the power-hungry CEO or President in charge is also benevolent and humble and decides to devolve power rather than effectively tell the army of AGIs “go forth and do what’s best according to me.” (And also, in that scenario, because alignment turned out to be easy / we got lucky and things worked well despite YOLOing it and investing relatively little in alignment + control.)
I’m more worried about coups/power-grabs than you are;
We don’t have to make individual guesses. It seems reasonable to get a base rate from human history. Although we may all disagree about how much this will generalise to AGI, evidence still seems better than guessing.
My impression from history is that coups/power-grabs and revolutions are common when the current system breaks down, or when a big capabilities advance (guns, radio, the printing press, bombs, etc.) opens a gap between new actors and old.
War between old actors also seems likely in these situations, because an asymmetric capabilities advance makes winner-takes-all approaches profitable. Winning a war, an empire, or a colony has historically paid off, but only if you have the advantage to win.