OK. Let's work with quaesarthago, moritaeneou, and vincredulcem. They are names/concepts to delineate certain areas of mindspace so that I can talk about the qualities of those areas.
In Q space, goals are few, specified in advance, and not open to alternative interpretation.
In M space, goals are slightly more numerous but less well specified, more subject to interpretation and change, and considered to be owned by the mind, with property rights over them.
In V space, goals are as numerous and diverse as the mind can imagine, and the mind does not consider itself to own them.
Specified is used as in a specification: determined in advance, immutable, and hopefully not open to alternative interpretations.
Personal is used in the sense of ownership.
Maximal is both largest in number and most diverse in equal measure. I am fully aware of the difficulties in counting clouds or using simple numbers where infinite copies of identical objects are possible.
Q is dangerous because if the few goals (or one goal) conflict with your goals, you are going to be very unhappy
M is dangerous because its slightly greater number of goals are owned by it and subject to interpretation and modification by it; if those goals conflict with your goals, you are going to be very unhappy.
V tries to achieve all goals, including yours
All I have done is to define wisdom as the quality of having maximal goals. That is very different from the normal interpretation of safe AGI.
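To make the comparison concrete, here is a minimal sketch in Python; the class, field, and variable names are illustrative assumptions of mine, not anything specified in the discussion. It simply records each region along the axes used above: how many goals, whether they are fixed in advance, and whether the mind considers itself to own them.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class MindspaceRegion:
    """Hypothetical encoding of one region of mindspace along the axes
    used above: goal count, whether the goals are fixed in advance
    (specified as per a specification), and whether the mind considers
    itself to own them."""
    name: str
    goal_count: str                # "few", "slightly more", or "maximal"
    fixed_in_advance: bool         # immutable, not open to reinterpretation
    owned_by_mind: Optional[bool]  # None where the discussion does not say

Q = MindspaceRegion("quaesarthago", "few",
                    fixed_in_advance=True, owned_by_mind=None)
M = MindspaceRegion("moritaeneou", "slightly more",
                    fixed_in_advance=False, owned_by_mind=True)
V = MindspaceRegion("vincredulcem", "maximal",
                    fixed_in_advance=False, owned_by_mind=False)

for region in (Q, M, V):
    # The danger argument in one line: Q and M can end up with goals that
    # conflict with yours; V tries to achieve all goals, including yours.
    dangerous_on_conflict = region.goal_count != "maximal"
    print(f"{region.name}: dangerous if its goals conflict with yours = {dangerous_on_conflict}")
```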
And, actually, your theological fiction is pretty close to what I had in mind (and well expressed; thank you).
Well, I’m not sure how far that advances things, but a possible failure mode (or is it?) of a Friendly AI occurs to me. In fact, I foresee opinions being divided about whether this would be a failure or a success.
Someone makes an AI, and intends it to be Friendly, but the following happens when it takes off.
It decides to create as many humans as it can, all living excellent lives, far better than what even the most fortunate existing human has. And these will be real lives, no tricks with simulations, no mere tickling of pleasure centres out of a mistaken idea of real utility. It’s the paradise we wanted. The only catch is, we won’t be in it. None of these people will be descendants or copies of us. We, it decides, just aren’t good enough at being the humans we want to be. It’s going to build a new race from scratch. We can hang around if we like, it’s not going to disassemble us for raw material, but we won’t be able to participate in the paradise it will build. We’re just not up to it, any more than a chimp can be a human.
It could transform us little by little into fully functional members of the new civilisation, maintaining continuity of identity. However, it assures us (and our proof of Friendliness assures us that we can believe it) that the people we would then be would not credit our present selves as having made any significant contribution to their identity.
Is this a good outcome, or a failure?
It’s good...
You seem to be saying (implying?) that continuity of identity should be very important for minds greater than ours; see http://www.goertzel.org/new_essays/IllusionOfImmortality.htm
I had ‘known’ the idea presented in the link for a couple of years, but it only clicked when I read the article; probably the writing style plus time did it for me.