I don’t view ASI as substantially different from an upload economy. There are strong theoretical reasons why (relatively extreme) inequality is necessary for Pareto efficiency, and Pareto efficiency is the very thing which creates utility (see Critch’s recent argument for example, but there were strong reasons to hold similar beliefs long before).
The distribution of contributions towards the future is extremely heavy-tailed: most contribute almost nothing, a select few contribute enormously. Future systems must effectively trade with the present to get created at all: this is just as true for corporations as it is for future complex AI systems (which will be very similar to corporations).
Furthermore, uploads will be able to create copies of themselves in proportion to their wealth, so wealth and measure become fungible/indistinguishable. This is already true to some extent today: the distribution of genetic ancestry is highly unequal, and the distribution of upload descendancy will be far more unequal, on accelerated timescales.
rather than a more-likely-to-maximize-utility criterion such as “whoever needs it most right now”.
This is a bizarre, disastrously misguided socialist political fantasy.
The optimal allocation of future resources over current humans will necessarily take the form of something like a historically backpropagated Shapley value distribution: future utility allocated in proportion to counterfactual importance in creating said future utility. Well-functioning capitalist economies already do this absent externalities; the function of good governance is to internalize all externalities.
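As a concrete illustration of what a Shapley-style allocation does (a minimal toy sketch with hypothetical players and made-up numbers, not anything from the original comment): each contributor is credited with its average marginal contribution across all orderings, so credit concentrates on the few players whose presence actually changes the outcome.

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal contribution
    to v over every ordering of the players (fine for a handful of them)."""
    orderings = list(permutations(players))
    totals = {p: 0.0 for p in players}
    for order in orderings:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            totals[p] += v(frozenset(coalition)) - before
    return {p: t / len(orderings) for p, t in totals.items()}

# Hypothetical "contribution to the future" game with made-up numbers:
# almost all of the value only appears when A and B are both present.
values = {frozenset(): 0, frozenset("A"): 10, frozenset("B"): 10,
          frozenset("C"): 1, frozenset("AB"): 100, frozenset("AC"): 12,
          frozenset("BC"): 12, frozenset("ABC"): 103}
print(shapley_values("ABC", lambda s: values[s]))
# -> {'A': 50.5, 'B': 50.5, 'C': 2.0}: heavily skewed toward A and B
```

With these toy numbers A and B each end up credited with roughly half of the total while C gets a sliver, which is the heavy-tailed shape the comment is pointing at.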
I don’t view ASI as substantially different than an upload economy.
I’m very confused about why you think that. Unlike an economy, an aligned ASI is an agent. Its utility function can be something that looks at the kind of economy you describe, and goes “huh, actually, extreme inequality seems not great, what if everyone got a reasonable amount of resources instead”.
It’s like you don’t think the people who get CEV’d would ever notice Moloch; their reflection processes would just go “oh yeah whatever this is fine keep the economy going”.
Most worlds where we don’t die are worlds where a single aligned ASI achieves decisive strategic advantage (and thus is permanently in charge of the future), in which case that single AI is an agent running a person/group’s CEV, which gets to look at the-world-as-a-whole and notice when its utility isn’t maximized and then can just do something else which is not that.
Some of the worlds where we don’t die are multipolar, which just means that a few different ASIs achieve decisive strategic advantage at a close-enough time that what they fill the lightcone with is a weighted composition of their various utility functions. But the set of ASIs who get to take part in that composition is still a constant set, locked in forever, without any competition to worry about, and that composed utility function is then what looks at the-world-as-a-whole.
This point, that I think an immutable set of ASIs grab the entire future and then they’re in charge forever and they simply stop any competitor to themselves from ever appearing, feels like it’s in the direction of the crucial disagreement between our perspectives. Whether my “socialist political fantasy” indeed describes what worlds-where-we-don’t-die actually look like feels like it’s downstream of that.
the function of good governance is to internalize all externalities.
That sure is a take! Wouldn’t the function of good governance be to maximize nice things, regardless of whether that’s best achieved by patching all the externality-holes in a capitalist economy, or by doing something which is not that?
I don’t view ASI as substantially different than an upload economy.
I’m very confused about why you think that.
You ignored most of my explanation, so I’ll reiterate a bit differently. But first, taboo the ASI fantasy.
any good post-AGI future is one with uploading—humans will want this
uploads will be very similar to AI, and will become more so as they transcend
the resulting upload economy is one of many agents with different values
the organizational structure of any Pareto-optimal multi-agent system is necessarily market-like
it is a provable fact that wealth/power inequality is a necessary side effect of this (see the sketch after this list)
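For readers who want the textbook version of those last two bullets (a sketch under standard convexity assumptions, added here for illustration rather than drawn from the comment itself): every Pareto-optimal allocation among agents with concave utilities $u_i$ maximizes some fixed-weight sum of utilities, and by the second welfare theorem such an allocation can be decentralized as a competitive (market) equilibrium with transfers. Which Pareto optimum you land on is determined entirely by the weights, and unequal weights mean unequal allocations:

$$x^{*} \in \arg\max_{x \in \mathcal{F}} \sum_i \lambda_i \, u_i(x_i), \qquad \lambda_i \ge 0, \quad \sum_i \lambda_i = 1.$$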
Most worlds where we don’t die are worlds where a single aligned ASI achieves decisive strategic advantage
Unlikely, but it also doesn’t matter: what alignment actually means is that the resulting ASI must approximate Pareto optimality with respect to its various stakeholders’ utility functions, which requires that:
it uses stakeholders’ own beliefs to evaluate the utility of actions
it must redistribute stakeholder power (i.e. wealth) toward agents with better predictive beliefs over time, in a fashion that looks like internal Bayesian updating (sketched below).
In other words, the internal structure of the optimal ASI is nigh indistinguishable from an optimal market.
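A minimal sketch of what that redistribution can look like, using wealth-weighted prediction-market / Kelly-style dynamics (the function name, agents, and numbers below are hypothetical, and this is one standard mechanism rather than necessarily the one meant above): each stakeholder’s wealth acts as a prior weight, and after each observed outcome it is multiplied by the probability the stakeholder assigned to that outcome relative to the wealth-weighted consensus.

```python
import numpy as np

def market_update(wealth, beliefs, outcome):
    """One round of wealth-weighted belief aggregation (hypothetical toy model).

    wealth[i]  : agent i's current wealth, acting as a prior weight
    beliefs[i] : probability agent i assigns to the event occurring
    outcome    : 1 if the event occurred, 0 otherwise
    """
    wealth = np.asarray(wealth, dtype=float)
    beliefs = np.asarray(beliefs, dtype=float)
    consensus = np.average(beliefs, weights=wealth)    # market probability
    p_agent = beliefs if outcome == 1 else 1.0 - beliefs
    p_market = consensus if outcome == 1 else 1.0 - consensus
    # Each agent's wealth is scaled by how much probability it put on what
    # actually happened, relative to the consensus; total wealth is conserved.
    return wealth * p_agent / p_market

# Toy run: the best-calibrated agent's share of total wealth grows.
w = np.array([1.0, 1.0, 1.0])
beliefs = np.array([0.9, 0.5, 0.2])    # agent 0 is most confident the event happens
for _ in range(5):
    w = market_update(w, beliefs, outcome=1)   # the event keeps happening
print(w / w.sum())                             # wealth concentrates on agent 0
```

Total wealth is conserved each round but flows toward the agents whose beliefs predict outcomes better: wealth plays the role of a prior weight and the per-round multiplier plays the role of a likelihood ratio, which is the sense in which this looks like internal Bayesian updating.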
Additionally, the powerful AI systems which are actually created are far more likely to be ones which precommit to honoring their creator stakeholders’ wealth distribution. In fact, that is part of what alignment actually means.