that material needs will likely be met (and selfish non-positional preferences mostly satisfied) due to extreme abundance (if humans retain control).
I agree with this, though I’d expand “if humans retain control” to: if humans retain control and some sufficient combination of culture/economics/politics/incentives continues to oppose arbitrary despotism.
I also think that even if all material needs are met, avoiding social stasis and lock-in matters.
Scope sensitive preferences
Scope sensitivity of preferences is a key concept here; thanks for pointing that out.
Various other considerations about types of preferences / things you can care about (presented without endorsement):
an instrumental preference to avoid stasis, because of a belief that it leads to other bad things (e.g. stagnant intellectual / moral / political / cultural progress, increasing autocracy)
altruistic preferences, combined with a fear that less altruism will result if today’s wealth hierarchy is locked in than if social progress and disruption continue
a belief that it’s culturally good when human competition has some anchoring to object-level physical reality (cf. the links here)
a general belief that things tend to go off the rails without a ground-truth, unbeatable feedback signal that the higher-level process needs to be wary of (see Gwern’s Evolution as a backstop for RL)
preferences that become more scope-sensitive due to transhumanist cognitive enhancement
positional preferences, i.e. wanting to be higher-status or more something than some other human(s)
a meta-positional-preference that positions are not locked in, because competition is fun
a preference for future generations having at least as much of a chance to shape the world, themselves, and their position as the current generation
an aesthetic preference for a world where hard work is rewarded, or rags-to-riches stories are possible
However, note that if these preferences are altruistic and likely to be the kind of thing other people might be sympathetic to, personal savings are IMO likely to be not-that-important relative to other actions.
I agree with this on an individual level. (On an org level, I think philanthropic foundations might want to consider my arguments above for money buying more results soon, but this needs to be balanced against the higher leverage on AI futures available sooner rather than later.)
Further, I do actually think that the default outcome is that existing governments at least initially retain control over most resources such that capital isn’t clearly that important, but I won’t argue for this here (and the post does directly argue against this).
Where do I directly argue against that? A big chunk of this post is pointing out how the shifting relative importance of capital vs labour changes the incentives of states. By default, I expect states to remain the most important and powerful institutions, but the frame here is very much human vs non-human inputs to power and what that means for humans, without taking any particular stance on how the non-human inputs are organised. I don’t think states vs companies vs whatever fundamentally changes the dynamic: with labour-replacing AI, power flows from data centres, other physical capital, and whoever has the financial capital to pay for it, and it sidesteps humans doing work. That is the shift I care about.
(However, I think which institutions do the bulk of decision-making re AI does matter for a lot of other reasons, and I’d be very curious to get your takes on that)
My guess is that the most fundamental disagreement here is about how much power tries to get away with when it can. My read of history leans towards: things are good for people when power is correlated with things being good for people, and otherwise not (though I think material abundance is very important too and always helps a lot). I am very skeptical of the stability of good worlds where incentives and selection pressures do not point towards human welfare.
For example, assuming a multipolar world where power flows from AI, the equilibrium is to put all your resources into AI competition and none into human welfare. I don’t think it’s anywhere near certain we actually reach that equilibrium, since sustained cooperation is possible (cf. Ostrom’s Governing the Commons), since a fairly trivial fraction of the post-AGI economy’s resources might suffice for human prosperity, and since maybe we in fact do get a singleton (though I’d have other issues with that). But this sort of concern still seems neglected and important to me.
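To make the equilibrium claim concrete, here is a minimal toy sketch under strong simplifying assumptions (two actors, unit budgets, winner-take-all control in AI investment, a 1%-step strategy grid; the payoff structure and all the numbers are purely illustrative, not a model of the real dynamics). A brute-force search for pure-strategy Nash equilibria finds only profiles where essentially the whole budget goes into the race:

```python
from itertools import product

GRID = [i / 100 for i in range(101)]  # fraction of a unit budget spent on the AI race


def payoff(a_self: float, a_other: float) -> float:
    """Welfare an actor actually enjoys: what it held back from the race,
    but only to the extent it retains control (winner-take-all, ties split)."""
    welfare = 1 - a_self
    if a_self > a_other:
        return welfare          # outcompetes the rival, keeps its welfare
    if a_self == a_other:
        return 0.5 * welfare    # shared control
    return 0.0                  # outcompeted: loses control, keeps nothing


def is_nash(a1: float, a2: float) -> bool:
    """Neither actor can strictly gain by unilaterally changing its AI spend."""
    return (payoff(a1, a2) >= max(payoff(d, a2) for d in GRID)
            and payoff(a2, a1) >= max(payoff(d, a1) for d in GRID))


equilibria = [(a1, a2) for a1, a2 in product(GRID, repeat=2) if is_nash(a1, a2)]
print(equilibria)
# every equilibrium has both actors spending ~all of the budget on the race
# (on this grid: (0.98, 0.98), (0.99, 0.99), (1.0, 1.0)), i.e. ~zero welfare spending
```

The winner-take-all assumption is doing all the work here: relax it, e.g. via enforceable cooperation or because retaining control is cheap relative to the budget, and the all-in equilibrium goes away, which is exactly the caveat above.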
fwiw, I see this post less as “responding” to something and more as laying out considerations on their own, with some contrasting takes as a foil.
(On Substack, the title is “Capital, AGI, and human ambition”, which is perhaps better)