I think this post misses the key considerations for perspective (1): longtermist-style scope sensitive utilitarianism. In this comment, I won’t make a positive case for the value of preventing AI takeover from a perspective like (1), but I will argue why I think the discussion in this post mostly misses the point.
(I separately think that preventing unaligned AI control of resources makes sense from perspective (1), but you shouldn’t treat this comment as my case for why this is true.)
You should treat this comment as (relatively : )) quick and somewhat messy notes rather than a clear argument. Sorry; I might respond to this post in a clearer way later. (I’ve edited this comment to add some considerations which I realized I had neglected.)
I might be somewhat biased in this discussion as I work in this area, and there might be some sunk cost fallacy at work.
(This comment is cross-posted from the EA forum. Matthew responded there, so consider going there for the response.)
First:
Argument two: aligned AIs are more likely to have a preference for creating new conscious entities, furthering utilitarian objectives
It seems odd to me that you don’t focus almost entirely on this sort of argument when considering total utilitarian style arguments. Naively, these views are fully dominated by the creation of new entities, who are far more numerous and likely could be much more morally valuable than economically productive entities. So, I’ll just be talking about a perspective roughly like this one, where creating new beings with “good” lives dominates.
With that in mind, I think you fail to discuss a large number of extremely important considerations from my perspective:
Over time (some subset of) humans (and AIs) will reflect on their views and preferences and will consider utilizing resources in different ways.
Over time (some subset of) humans (and AIs) will get much, much smarter or, more minimally, will receive advice from entities which are much smarter.
It seems likely to me that the vast, vast majority of moral value (from this sort of utilitarian perspective) will be produced by agents deliberately trying to produce moral value rather than incidentally via economic production. This applies for both aligned and unaligned AI. I expect that only a tiny fraction of available computation goes toward optimizing economic production, that only a smaller fraction of this is morally relevant, and that the moral weight per unit of this computation is much lower than that of computation specifically optimized for moral value from a similar perspective. This bullet is somewhere between a consideration and a claim, though it seems like possibly our biggest disagreement. I think it’s possible that this disagreement is driven by some of the other considerations I list.
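To make the shape of this claim concrete, here’s a toy back-of-the-envelope sketch in Python. Every number in it is a made-up placeholder rather than an estimate I’d defend; the point is just that incidental value is a product of several small fractions, while deliberately optimized value isn’t.

```python
# Toy sketch of the claim above: moral value produced incidentally via economic
# production vs. value produced by agents deliberately optimizing for it.
# All numbers below are made-up placeholders, purely for illustration.

frac_compute_econ = 0.05       # assumed: fraction of all computation spent on economic production
frac_econ_relevant = 0.01      # assumed: fraction of that computation which is morally relevant
weight_incidental = 0.001      # assumed: moral value per unit of incidentally relevant computation

frac_compute_optimized = 0.10  # assumed: fraction of computation deliberately aimed at moral value
weight_optimized = 1.0         # assumed: moral value per unit of deliberately optimized computation

incidental_value = frac_compute_econ * frac_econ_relevant * weight_incidental
optimized_value = frac_compute_optimized * weight_optimized

print(f"incidental: {incidental_value:.1e}, optimized: {optimized_value:.1e}")
print(f"optimized / incidental: {optimized_value / incidental_value:.0e}")
```

Under these placeholder numbers the deliberately optimized value dominates by several orders of magnitude, and the qualitative point survives a wide range of choices for the individual fractions.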
Exactly what types of beings are created might be much more important than quantity.
Ultimately, I don’t care about a simplified version of total utilitarianism; I care about what preferences I would endorse on reflection. There is a moderate a priori argument for thinking that other humans who bother to reflect on their preferences might end up in a similar epistemic state. And I care less about the preferences which are relatively contingent among people who are thoughtful about reflection.
Large fractions of the current wealth of the richest people are devoted to what they claim is altruism. My guess is that this will increase over time.
Just doing a trend extrapolation on people who state an interest in reflection and scope-sensitive altruism already indicates a non-trivial fraction of resources, if we weight by current wealth/economic power. (I think; I’m not totally certain here.) This case is even stronger if we consider groups with substantial influence over AI.
Being able to substantially affect the preferences of (at least partially unaligned) AIs that will seize power/influence still seems extremely leveraged under perspective (1), even if we accept the arguments in your post. I think this is less leveraged than retaining human control (as we could always later create AIs with the preferences we desire, and I think people with a similar perspective to mine will have substantial power). However, it is plausible that under your empirical views the dominant question in being able to influence the preferences of these AIs is whether you have power, not whether you have technical approaches which suffice.
I think if I had your implied empirical views about how humanity and unaligned AIs use resources, I would be very excited about a proposal like “politically agitate for humanity to defer most resources to an AI successor which has moral views that people can agree are broadly reasonable and good behind the veil of ignorance”. I think your views imply that massive amounts of value are left on the table in either case, such that humanity (hopefully willingly) forfeiting control to a carefully constructed successor looks amazing.
Humans who care about using vast amounts of computation might be able to use their resources to buy this computation from people who don’t care. Suppose 10% of people (really, resource-weighted people) care about reflecting on their moral views and doing scope-sensitive altruism of a utilitarian bent, and 90% of people care about jockeying for status without reflecting on their views. It seems plausible to me that the 90% will jockey for status via things that consume relatively small amounts of computation, like buying fancier pieces of land on Earth or the coolest-looking stars, while the 10% who care about using vast amounts of computation can buy it relatively cheaply. Thus, most of the computation will go to those who care. Probably most people who don’t reflect and who buy purely positional goods will care less about computation than about things like random positional goods (e.g., land on Earth, which will be bid up to (literally) astronomical prices). I could see fashion going either way, but computation becoming the dominant status good seems unlikely unless people do heavy reflection. And if they heavily reflect, then I expect more altruism, etc.
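Here’s a minimal sketch of the allocation argument above, again with purely hypothetical numbers: if the status-seeking 90% spend only a sliver of their (wealth-weighted) resources on computation, the 10% who care about it end up buying most of it.

```python
# Toy allocation sketch: wealth-weighted shares of spending on computation.
# All numbers are hypothetical placeholders, not predictions.

reflective_wealth = 0.10   # assumed: wealth share of people who want lots of computation
status_wealth = 0.90       # assumed: wealth share of people playing positional-status games

# Assumed: status-seekers put only 1% of their wealth toward computation,
# since their status games center on scarce positional goods (land, particular stars).
status_spend_on_compute = 0.01 * status_wealth
reflective_spend_on_compute = 1.00 * reflective_wealth  # assumed: they spend essentially all of it

total_compute_spend = status_spend_on_compute + reflective_spend_on_compute
reflective_fraction = reflective_spend_on_compute / total_compute_spend

print(f"share of computation bought by the reflective 10%: {reflective_fraction:.0%}")
# ~92% under these made-up numbers: most computation goes to those who care about it.
```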
Your preference-based arguments seem uncompelling to me because I expect that the dominant source of beings won’t be economic production. But I also don’t understand a version of preference utilitarianism which seems to match what you’re describing, so this seems mostly unimportant.
Given some of our main disagreements, I’m curious what you think humans and unaligned AIs will be economically consuming.
Also, to be clear, none of the considerations I listed make a clear and strong case for unaligned AI being less morally valuable, but they do make the case that the relevant argument here is very different from the considerations you seem to be listing. In particular, I think value won’t be coming from incidental consumption.
One additional meta-level point which I think is important: I think that existing writeups of why human control would have more moral value than unaligned AI control from a longtermist perspective are relatively weak and often specific writeups are highly flawed. (For some discussion of flaws, see this sequence.)
I just think that this write-up misses what seem to me to be key considerations; I’m not claiming that existing work settles the question or is even robust at all.
And it’s somewhat surprising and embarrassing that this is the state of the current work, given that longtermism is reasonably common and arguments for working on AI x-risk from a longtermist perspective are also common.