This post seems to misunderstand what it is responding to and underplay a very key point: that material needs will likely be met (and selfish non-positional preferences mostly satisfied) due to extreme abundance (if humans retain control).
It mentions this offhand:
Given sufficiently strong AI, this is not a risk about insufficient material comfort.
But this was a key thing people were claiming when arguing that money won’t matter. They were claiming that personal savings will likely not be that important for guaranteeing a reasonable amount of material comfort (or that a tiny amount of personal savings will suffice).
It seems like there are two importantly different types of preferences:
Material needs and selfish (non-positional) preferences with roughly log returns
Scope sensitive preferences
Indeed, for scope-sensitive preferences (that you expect won’t be shared with whoever otherwise ends up with power), you want to maximize your power, and insofar as money allows for more of this power (e.g. buying galaxies), money looks good.
However, note that if these preferences are altruistic and likely to be the kind of thing other people might be sympathetic to, personal savings are IMO likely to be not-that-important relative to other actions.
Further, I do actually think that the default outcome is that existing governments at least initially retain control over most resources such that capital isn’t clearly that important (for long-run scope-sensitive preferences), but I won’t argue for this here (and the post does directly argue against this).
fwiw, I see this post less as “responding” to something, and more laying out considerations on their own with some contrasting takes as a foil.
(On Substack, the title is “Capital, AGI, and human ambition”, which is perhaps better)
that material needs will likely be met (and selfish non-positional preferences mostly satisfied) due to extreme abundance (if humans retain control).
I agree with this, though I’d add: “if humans retain control” and some sufficient combination of culture/economics/politics/incentives continues opposing arbitrary despotism.
I also think that even if all material needs are met, avoiding social stasis and lock-in matters.
Scope sensitive preferences
Scope sensitivity of preferences is a key concept that matters here; thanks for pointing that out.
Various other considerations about types of preferences / things you can care about (presented without endorsement):
instrumental preference to avoid stasis because of a belief it leads to other bad things (e.g. stagnant intellectual / moral / political / cultural progress, increasing autocracy)
altruistic preferences combined with a fear that less altruism will result if today’s wealth hierarchy is locked in than if social progress and disruption continue
a belief that it’s culturally good when human competition has some anchoring to object-level physical reality (cf. the links here)
a general belief in a tendency for things to go off the rails without a ground-truth unbeatable feedback signal that the higher-level process needs to be wary of—see Gwern’s Evolution as a backstop for RL
preferences that become more scope-sensitive due to transhumanist cognitive enhancement
positional preferences, i.e. wanting to be higher-status or more something than some other human(s)
a meta-positional-preference that positions are not locked in, because competition is fun
a preference for future generations having at least as much of a chance to shape the world, themselves, and their position as the current generation
an aesthetic preference for a world where hard work is rewarded, or rags-to-riches stories are possible
However, note that if these preferences are altruistic and likely to be the kind of thing other people might be sympathetic to, personal savings are IMO likely to be not-that-important relative to other actions.
I agree with this on an individual level. (On an org level, I think philanthropic foundations might want to consider my arguments above for money buying more results soon, but this needs to be balanced against higher leverage on AI futures sooner rather than later.)
Further, I do actually think that the default outcome is that existing governments at least initially retain control over most resources such that capital isn’t clearly that important, but I won’t argue for this here (and the post does directly argue against this).
Where do I directly argue against that? A big chunk of this post is pointing out how the shifting relative importance of capital v labour changes the incentives of states. By default, I expect states to remain the most important and powerful institutions, but the frame here is very much human v non-human inputs to power and what that means for humans, without any particular stance on how the non-human inputs are organised. I don’t think states v companies v whatever fundamentally changes the dynamic; with labour-replacing AI, power flows from data centres, other physical capital, and whoever has the financial capital to pay for it, and sidesteps humans doing work, and that is the shift I care about.
(However, I think which institutions do the bulk of decision-making re AI does matter for a lot of other reasons, and I’d be very curious to get your takes on that)
My guess is that the most fundamental disagreement here is about how much power tries to get away with when it can. My read of history leans towards: things are good for people when power is correlated with things being good for people, and otherwise not (though I think material abundance is very important too and always helps a lot). I am very skeptical of the stability of good worlds where incentives and selection pressures do not point towards human welfare.
For example, assuming a multipolar world where power flows from AI, the equilibrium is putting all your resources on AI competition and none on human welfare. I don’t think it’s anywhere near certain we actually reach that equilibrium, since sustained cooperation is possible (cf. Ostrom’s Governing the Commons), and since a fairly trivial fraction of the post-AGI economy’s resources might suffice for human prosperity (and since maybe we in fact do get a singleton—but I’d have other issues with that). But this sort of concern still seems neglected and important to me.
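To make the all-resources-on-competition equilibrium concrete, here is a minimal contest-style sketch (entirely my own illustration; the symbols V, w, x_i, and B are assumptions, not anything argued in the thread): two actors each split a budget B between AI competition and human welfare, the competition allocates a prize of value V in proportion to competitive spending, and w is the marginal value of a unit of resources spent on welfare instead.

```latex
% Stylized two-actor contest (illustrative assumption only):
%   x_i : resources actor i spends on AI competition, x_i \le B
%   V   : value of the power/resources the competition allocates
%   w   : marginal value of resources spent on human welfare instead
u_i = V \frac{x_i}{x_1 + x_2} - w\, x_i
% First-order condition and symmetric equilibrium:
\frac{\partial u_i}{\partial x_i} = \frac{V x_j}{(x_1 + x_2)^2} - w = 0
\quad\Longrightarrow\quad x^{*} = \frac{V}{4w}
```

Under these assumptions, if the contested value V is large relative to the welfare value w, the interior solution V/4w exceeds the budget B and both actors end up at the corner of spending everything on competition; sustained cooperation, or a small enough V relative to w, is what keeps the outcome away from that corner.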
Under log returns to money, personal savings still matter a lot for selfish preferences. Suppose the material comfort component of someone’s utility is 0 utils at a consumption of $1/day. Then a moderately wealthy person consuming $1000/day today will be at 7 utils. The owner of a galaxy, at maybe $10^30/day, will be at 69 utils, but doubling their resources will still add the same 0.69 utils it would for today’s moderately wealthy person. So my guess is they will still try pretty hard at acquiring more resources, similarly to people in developed economies today who balk at their income being halved and see it as a pretty extreme sacrifice.
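Spelling out the arithmetic behind those numbers (a minimal restatement of the log-utility assumption in the previous paragraph, with the $1/day consumption level as the zero point):

```latex
u(c) = \ln\!\left(\frac{c}{\$1/\text{day}}\right)
% Moderately wealthy person today:
u(\$1000/\text{day}) = \ln 1000 \approx 6.9 \text{ utils}
% Galaxy owner:
u(\$10^{30}/\text{day}) = 30 \ln 10 \approx 69.1 \text{ utils}
% Doubling resources, at any consumption level:
u(2c) - u(c) = \ln 2 \approx 0.69 \text{ utils}
```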
True, though I think many people have the intuition that returns diminish faster than log (at least given current tech).
For example, most people think increasing their income from $10k to $20k would do more for their material wellbeing than increasing it from $1bn to $2bn.
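To illustrate how much faster than log that intuition implies, here is an example with CRRA utility and \gamma = 2, i.e. u(c) = -1/c (the functional form is my assumption, not something claimed above):

```latex
u(c) = -\frac{1}{c} \qquad \text{(CRRA, } \gamma = 2\text{)}
u(\$20\text{k}) - u(\$10\text{k}) = \frac{1}{10{,}000} - \frac{1}{20{,}000} = 5 \times 10^{-5}
u(\$2\text{bn}) - u(\$1\text{bn}) = \frac{1}{10^{9}} - \frac{1}{2 \times 10^{9}} = 5 \times 10^{-10}
```

Here the first doubling is worth about 100,000 times as much as the second, whereas under log returns both doublings are worth the same ln 2 ≈ 0.69.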
I think the key issue is whether new tech makes it easier to buy huge amounts of utility, or whether people want to satisfy other preferences beyond material wellbeing (which may have log or even close-to-linear returns).
There are always diminishing returns to money spent on consumption, but technological progress creates new products that expand what money can buy. For example, no amount of money in 1990 was enough to buy an iPhone.
More abstractly, there are two effects from AGI-driven growth: moving to a further point on the utility curve such that the derivative is lower, and new products increasing the derivative at every point on the curve (relative to what it was on the old curve). So even if in the future the lifestyles of people with no savings and no labor income will be way better than the lifestyles of anyone alive today, they still might be far worse than the lifestyles of people in the future who own a lot of capital.
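One minimal way to write down those two effects (my notation and an assumed functional form, where the available product menu scales utility): let period-t utility from consumption c be u_t(c) = \theta_t \ln c, with \theta_t capturing the range of products money can buy at time t.

```latex
u_t(c) = \theta_t \ln c, \qquad u_t'(c) = \frac{\theta_t}{c}
% Effect 1: growth raises c, which lowers u_t'(c) along a fixed curve.
% Effect 2: new products raise \theta_{t+1} > \theta_t, which raises u'(c) at every c.
% Gap between a capital owner consuming C and the no-savings baseline \underline{c}:
u_{t+1}(C) - u_{t+1}(\underline{c}) = \theta_{t+1} \ln\!\left(\frac{C}{\underline{c}}\right)
```

So even if the baseline \underline{c} is far above anything available today, the gap scales with \theta_{t+1} and with the ratio C/\underline{c}, which is the sense in which owning a lot of capital can still matter a great deal.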
If you feel this post misunderstands what it is responding to, can you link to a good presentation of the other view on these issues?