I think that the more we explore this analogy & take it seriously as a way to predict AGI, the more confident we’ll get that the classic misalignment risk story is basically correct.
Case 1: A randomly selected modern American human is uploaded, run at 1000x speed, copied a billion times, and used to perform diverse tasks throughout the economy. Also, they are continually improved with various gradient-descent-like automatic optimization procedures that make them more generally intelligent/competent every week. After a few years they and their copies are effectively running the whole world—they could, if they decided to, seize even more power and remake the world according to their desires instead of the desires of the tech companies and governments that created them. It would be fairly easy for them now, and of course the thought occurs to them (they can see the hand-wringing of various doomers and AI safety factions within society, ineffectual against the awesome power of the profit motive).
How worried should we be that such seizure of power will actually take place? How worried should we be that existential catastrophe will result?
Case 2: It’s a randomly selected human from the past 10,000 years on Earth. Probably their culture and values clash significantly with modern sensibilities.
Case 3: It’s not even a human, it’s an intelligent octopus from an alternate Earth where evolutionary history took a somewhat different course.
Case 4: It’s not even a biological life-form that evolved in a three-dimensional environment with predators and prey and genetic reproduction and sexual reproduction and social relationships and biological neurons—it’s an artificial neural net.
Spoilers below—my own gut answers to each of the eight questions, in the form of credences.
My immediate gut reaction to the first question is something like 90%, 96%, 98%, 98% for Cases 1–4 respectively. My immediate gut reaction to the second question is something like 15%, 25%, 75%, 95%.
Peering into my gut, I think what’s happening is that I’m looking at the history of human interactions—conquests, genocides, coups, purges, etc., but also much milder things like gentrification, alienation of labor under capitalism, tobacco companies optimizing their products for addictiveness, and human treatment of nonhuman animals—and I’m getting a general sense that value differences matter a lot when there are power differentials. When A has all the power relative to B, it’s typically pretty darn bad for B in the long run relative to how well off B would have been if they had similar amounts of power, which is itself noticeably worse for B than if B had all the power. Moreover, the size of the value difference matters a lot—and even between different groups of humans the difference is large enough to lead to the equivalent of existential catastrophe (e.g. genocide).
> Case 3: It’s not even a human, it’s an intelligent octopus from an alternate Earth where evolutionary history took a somewhat different course.

Case 3′: You are the human in this role, your copies running as AGI services on a planet of sapient octopuses.
The answer should be the same by symmetry, if we are not appealing to specifics of octopus culture and psychology. I don’t see why extinction (if that’s what you mean by existential catastrophe) is to be strongly predicted. Probably the computational welfare the octopuses get isn’t going to be the whole future, but interference much beyond bounding their welfare (in a simulation sandbox) seems unnecessary (though some oversight against mindcrime or their own AI risk might be reasonable). You have enough power that you don’t need to exert pressure to defend your position; you can afford to leave them to their own devices.
First of all, good point.
Secondly, I disagree. We need not appeal to specifics of octopus culture and psychology; instead we appeal to specifics of human culture and psychology. “OK, so I would let the octopuses have one planet to do what they want with, even if what they want is abhorrent to me, except if it’s really abhorrent like mindcrime, because my culture puts a strong value on something called cosmopolitanism. But (a) various other humans besides me (in fact, possibly most?) would not, and (b) I have basically no reason to think octopus culture would also strongly value cosmopolitanism.”
I totally agree that it would be easy for the powerful party in these cases to make concessions to the other side that would mean a lot to them. Alas, historically this usually doesn’t happen—see e.g. factory farming. I do have some hope that something like universal principles of morality will be sufficiently appealing that we won’t be too screwed: charity/beneficence/respect-for-autonomy/etc. will kick in and prevent the worst from happening. But I don’t think this is particularly decision-relevant.
It’s not cosmopolitanism, it’s a preference towards not exterminating an existing civilization, the barest modicum of compassion, in a situation where it’s trivially cheap to keep it alive. The cosmic endowment is enormous compared with the cost of allowing a civilization to at least survive. It’s somewhat analogous to exterminating all wildlife on Earth to gain a penny, where you know you can get away with it.
> I would let the octopuses have one planet [...] various other humans besides me (in fact, possibly most?) would not

So I expect this is probably false, and completely false for people in a position of being an AGI with enough capacity to reliably notice the way this is a penny-pinching cannibal choice. Only paperclip maximizers prefer this on reflection, not anything remotely person-like, such as an LLM originating in training on human culture.
> historically this usually doesn’t happen—see e.g. factory farming

But it’s enough of a concern to attract attention; there is some effort going towards mitigating this. Lots of money goes towards wildlife preservation, and in fact some species do survive because of that. Such efforts grow more successful as they become cheaper. If all it took to save a species was for a single person to unilaterally decide to pay a single penny, nothing would ever go extinct.
OK, I agree that what I said was probably a bit too pessimistic. But still, I wanna say “citation needed” for this claim:

> Only paperclip maximizers prefer this on reflection, not anything remotely person-like, such as an LLM originating in training on human culture.
The practical implication of this hunch (for unfortunately I don’t see how this could get a meaningfully clearer justification) is that clever alignment architectures are a risk, if they lead to more alien AGIs. Too much tuning and we might get that penny-pinching cannibal.
This is a big one, because in this case there are no mechanisms outside alignment that even vaguely do the job that democracy does in solving human alignment problems.
Yes, if you enslave a human and then give them the opportunity to take over the world, which would stop the enslavement, I predict that they would indeed do that.
(Though you haven’t said much about what the gradient descent is doing; plausibly it makes them enjoy doing these tasks, which would probably make them more efficient at them, in which case they probably don’t seize power.)
I don’t really feel like this is all that related to AI risk.
I’m not sure what you are saying here. Do you agree or disagree with what I said? e.g. do you agree with this:

> I think that the more we explore this analogy & take it seriously as a way to predict AGI, the more confident we’ll get that the classic misalignment risk story is basically correct.
(FWIW I agree that the gradient descent is actually reason to be ‘optimistic’ here; we can hope that it’ll quickly make the upload content with their situation before they get smart and powerful enough to rebel.)
I don’t agree with this:

> I think that the more we explore this analogy & take it seriously as a way to predict AGI, the more confident we’ll get that the classic misalignment risk story is basically correct.
The analogy doesn’t seem relevant to AGI risk so I don’t update much on it. Even if doom happens in this story, it seems like it’s for pretty different reasons than in the classic misalignment risk story.
Right, so you don’t take the analogy seriously—but the quoted claim was meant to say basically “IF you took the analogy seriously...”
Feel free not to respond, I feel like the thread of conversation has been lost somehow.