Well, I’m not sure how far that advances things, but a possible failure mode (or is it?) of a Friendly AI occurs to me. In fact, I foresee opinions being divided about whether this would be a failure or a success.
Someone makes an AI, and intends it to be Friendly, but the following happens when it takes off.
It decides to create as many humans as it can, all living excellent lives, far better than what even the most fortunate existing human has. And these will be real lives, no tricks with simulations, no mere tickling of pleasure centres out of a mistaken idea of real utility. It’s the paradise we wanted. The only catch is, we won’t be in it. None of these people will be descendants or copies of us. We, it decides, just aren’t good enough at being the humans we want to be. It’s going to build a new race from scratch. We can hang around if we like, it’s not going to disassemble us for raw material, but we won’t be able to participate in the paradise it will build. We’re just not up to it, any more than a chimp can be a human.
It could transform us little by little into fully functional members of the new civilisation, maintaining continuity of identity. However, it assures us (and our proof of Friendliness assures us that we can believe it) that the people we would then be would not credit our present selves with having made any significant contribution to their identity.
Is this a good outcome, or a failure?
It’s good…
You seem to be saying (or implying?) that continuity of identity should be very important for minds greater than ours; see http://www.goertzel.org/new_essays/IllusionOfImmortality.htm
I had ‘known’ the idea presented in the link for a couple of years, but it only clicked when I read the article; probably the writing style plus time did it for me.