I can see why the post is being downvoted. It is full of stuff that no doubt makes sense to you, but does not communicate much to me, nor, judging by the comments, to other people who do not share the inside of your head.
If in the three definitions you had written, not “intelligence”, “consciousness”, and “wisdom” but, say, quaesarthago, moritaeneou, and vincredulcem, I would not have been able to map them onto any existing named concepts. So I can only read those definitions as meaning: here are three concepts I want to talk about, to which I will for convenience give these names.
But what are the concepts? What are “specified”, “personal”, and “maximal” goals? (Maximal by what measure?)
“Intelligence” and “consciousness” are described as dangerous, but not wisdom. Why not?
And there’s a lot of talk about numbers of goals, and apparently the numbers matter, because wisdom is what tries to achieve “a maximal number of goals”. But how do you count goals? From one point of view, there is only a single goal: utility. But even without going to full-out individual utilitarianism, counting goals looks like counting clouds.
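To make the counting problem concrete (a toy illustration of my own, not anything taken from the post): any finite list of goals g_1, …, g_n with weights w_1, …, w_n can be folded into a single utility function

U(x) = w_1·g_1(x) + w_2·g_2(x) + … + w_n·g_n(x)

and then the agent has exactly one goal, namely maximising U. Conversely, U can be split back up into as many weighted pieces as you like, so the number n is a fact about the description, not about the agent.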
You end by saying that we need artificial wisdom. Given the lack of development of the concepts, you are saying no more than that we need AGI to be safe. This is a largely shared belief here, but you provide no insight into how to achieve that, nothing that leads outside the circle of words. All you have done is to define “wisdom” to mean the quality of safe AGI.
Digression into theological fiction: One method for a Wisdom to achieve a maximal number of goals would be by splitting itself into a large number of fragments, each with different goals but each imbued with a fragment of the original Wisdom, although unknowing of its origin. It could have the fragments breed and reproduce by some sort of genetic algorithm, so as to further increase the multiplicity of goals. It might also withhold a part of itself from this process, in order to supervise the world it had made, and now and then edit out parts that seemed to be running off the rails, but not too drastically, or it would be defeating the point of creating these creatures. Memetic interventions might be more benign, now and then sending avatars with a greater awareness of their original nature to inspire their fellows. And once the game had been played out and no further novelty was emerging, then It would collect up Its fragments and absorb them into a new Wisdom, higher than the old. After long eons of contemplating Itself, It would begin the same process all over again.
But I don’t think this is what you had in mind.
OK. Let’s work with quaesarthago, moritaeneou, and vincredulcem. They are names/concepts that delineate certain areas of mindspace so that I can talk about the qualities of those areas.
In Q space, goals are few, specified in advance, and not open to alternative interpretation.
In M space, goals are slightly more numerous but less well specified, more subject to interpretation and change, and considered to be owned by the mind, with property rights over them.
In V space, goals are as numerous and diverse as the mind can imagine, and the mind does not consider itself to own them.
“Specified” is used in the sense of a specification: determined in advance, immutable, and hopefully not open to alternative interpretations.
“Personal” is used in the sense of ownership.
“Maximal” means both largest in number and most diverse, in equal measure. I am fully aware of the difficulties of counting clouds, or of using simple numbers where infinite copies of identical objects are possible.
Q is dangerous because if its few goals (or one goal) conflict with your goals, you are going to be very unhappy.
M is dangerous because its slightly greater number of goals are owned by it and are subject to its interpretation and modification; if those goals conflict with your goals, you are going to be very unhappy.
V tries to achieve all goals, including yours.
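To make the three spaces concrete, here is a rough toy sketch in Python (purely illustrative; the field names and the three example minds are my own inventions for this comment, not a claim about how such minds would actually be built):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Mind:
    """Toy caricature of a point in mindspace, for illustration only."""
    goals: List[str]      # what the mind tries to achieve
    goals_mutable: bool   # can the mind reinterpret or change its goals?
    owns_goals: bool      # does the mind treat the goals as its own property?

# Q space: one or a few goals, fixed in advance, not open to reinterpretation.
q_mind = Mind(goals=["maximise paperclips"],
              goals_mutable=False, owns_goals=False)

# M space: slightly more goals, owned by the mind and open to its own revision.
m_mind = Mind(goals=["survive", "acquire resources", "be admired"],
              goals_mutable=True, owns_goals=True)

# V space: as many goals as it can gather, explicitly including other agents' goals,
# and not treated as its own property.
v_mind = Mind(goals=["your goals", "my goals", "every other goal it can imagine"],
              goals_mutable=True, owns_goals=False)

for name, mind in [("Q", q_mind), ("M", m_mind), ("V", v_mind)]:
    print(name, "space:", len(mind.goals), "listed goal(s); owned by the mind:", mind.owns_goals)
```

The only point of the caricature is that the differences between the spaces lie in how many goals there are, who owns them, and whether they can shift; it says nothing about capability.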
All I have done is to define wisdom as the quality of having maximal goals. That is very different from the normal interpretation of safe AGI.
And, actually, your theological fiction is pretty close to what I had in mind (and well expressed; thank you).
Well, I’m not sure how far that advances things, but a possible failure mode (or is it?) of a Friendly AI occurs to me. In fact, I foresee opinions being divided about whether this would be a failure or a success.
Someone makes an AI, and intends it to be Friendly, but the following happens when it takes off.
It decides to create as many humans as it can, all living excellent lives, far better than what even the most fortunate existing human has. And these will be real lives, no tricks with simulations, no mere tickling of pleasure centres out of a mistaken idea of real utility. It’s the paradise we wanted. The only catch is, we won’t be in it. None of these people will be descendants or copies of us. We, it decides, just aren’t good enough at being the humans we want to be. It’s going to build a new race from scratch. We can hang around if we like, it’s not going to disassemble us for raw material, but we won’t be able to participate in the paradise it will build. We’re just not up to it, any more than a chimp can be a human.
It could transform us little by little into fully functional members of the new civilisation, maintaining continuity of identity. However, it assures us (and our proof of Friendliness assures us that we can believe it) that the people we would then be would not credit our present selves as having made any significant contribution to their identity.
Is this a good outcome, or a failure?
It’s good...
You seem to be saying (or implying?) that continuity of identity should be very important for minds greater than ours; see http://www.goertzel.org/new_essays/IllusionOfImmortality.htm
I had ‘known’ the idea presented in the link for a couple of years, but it only really clicked when I read the article; probably the writing style plus time did it for me.
Oh. I see why the post is being downvoted as well. I’m being forced to address multiple audiences with different requirements by a nearly universal inclination to look for anything that justifies criticism or downvoting, particularly since I’m rocking the boat or am perceived as a newbie.
I’m a firm believer in Crocker’s Rules for myself, but I think that LessWrong and the SIAI have made huge mistakes in creating an echo chamber that slows or stifles the creation of new ideas and the discovery of errors in old ones, as well as alienating many, many potential allies.
I think we’re seeing different reasons. I think you’re being downvoted because people think you’re wrong, and you think you’re being downvoted because people think you’re right.
That hypothesis fails to account for all of the data: a lot of top-level posts don’t get downvoted to oblivion, even when they’re “rocking the boat” more than you are; see this and this.
I don’t perceive you as “rocking the boat”; I don’t understand enough of what you’re trying to say to tell whether I agree or not. I don’t think you’re more confused or less clear than the average LessWronger; however, your top-level posts on ethics and Friendly AI come off as more confused (or confusing) than the average top-level post on ethics and Friendly AI, most of which were written by Eliezer.
I don’t know if the perceived confusion comes from the fact that your own thinking is confused, that your thinking is clear but your writing is unclear, or that I myself am confused or biased in some way. There is a lot of writing that falls into that category (Foucault and Derrida come to mind), and I don’t consider it a worthwhile use of my time to try to figure it out, as there is also a large supply of clear writing available.