Some of the other things you suggest, like future systems keeping humans physically alive, do not seem plausible to me. Whatever they’re trying to do, there’s almost certainly a better way to do it than by keeping Matrix-like human body farms running.
Insofar as AIs are doing things because they are what existing humans want (within some tiny cost budget), I expect that what actually happens is what humans want (rather than, e.g., what the AI thinks they “should want”), so long as what humans want is cheap to provide.
See also here, which makes a similar argument in response to a similar point.
So, if humans don’t end up physically alive but do end up as uploads/body farms/etc., one of a few things must be true:
Humans didn’t actually want to be physically alive and instead wanted to be uploads. In this case, it is very misleading to say “the AI will kill everyone (and sure there might be uploads, but you don’t want to be an upload, right?)” because we’re conditioning on people deciding to become uploads!
It was too expensive to keep people physically alive rather than as uploads. I think this is possible but somewhat implausible: the main sources of cost here apply to uploads as much as to keeping humans physically alive; in particular, death due to conflict or mass slaughter in cases where conflict was the AI’s best option for increasing its probability of long-run control.
I don’t think slaughtering billions of people would be very useful. As a reference point, wars between countries almost never kill that large a fraction of the population.
Unfortunately, if the AI really barely cares (e.g., <1/billion caring), slaughtering might only need to be barely useful.
I agree it is unlikely to be very useful.
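As a toy formalization of the “barely useful” threshold (the weight $\varepsilon$ and the quantities $u$ and $\Delta U_{\text{humans}}$ are illustrative notation I am assuming, not anything from the discussion above): suppose the AI maximizes

\[
U \;=\; U_{\text{AI}} \;+\; \varepsilon\, U_{\text{humans}}, \qquad \varepsilon < 10^{-9}.
\]

Then slaughter is preferred whenever its usefulness $u$ (in units of $U_{\text{AI}}$) exceeds the weighted value of keeping humans around,

\[
u \;>\; \varepsilon\, \Delta U_{\text{humans}},
\]

so with $\varepsilon < 10^{-9}$ even a tiny strategic benefit clears the bar. The same threshold cuts the other way for the earlier point about cheapness: keeping humans alive (and alive in the form they actually want) is also chosen whenever its cost stays below that same tiny budget.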