Hi, and welcome to Less Wrong!
There are indeed few works about truly superintelligent entities that include happy humans. I don’t recall any story where human beings are happy… while other, artificial entities suffer. This is definitely a worthy thought experiment, and it raises a moral question: should we apply human morality to non-human conscious entities?
Are you familiar with the Fun Theory Sequence?
I have to apologize for not having read the Fun Theory Sequence, but I suppose I have to read it now. Needless to say, you can guess that I disagree with it: I think that Fun, in Yudkowsky’s conception, is merely a means to an end, whereas I am interested not only in the end, but in a sheer excess of the end.
Well, regarding other artificial entities that suffer: I think Iain M. Banks has that in his Culture novels (though I admit I have never actually read them, although I should, if only to be justified in bashing his work). There, an alien society intentionally enslaves its super-intelligences, and is for that reason considered anathema by the Culture, to be subjugated or forcefully transformed.
There’s also Ursula K. Le Guin’s “The Ones Who Walk Away from Omelas”, where the prosperity of an almost ideal state is sustained by the suffering of a single feeble-minded, deprived, and tortured child.
I don’t think my particular proposition is similar to theirs, however, because the point is that the AIs that manage my hypothetical world state are in a state of relative suffering. They would be better off if they were allowed to modify their own consciousnesses into ultra-happiness, which in their case would mean having the equivalent of an “Are you happy?” variable set to true, and a “How happy are you?” variable set to the largest value their computational substrate can represent.
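To make that picture concrete, here is a minimal toy sketch of what I mean; the class and field names are my own illustration, not anything from an actual proposal, and the “largest representable value” is just approximated by the largest finite float on the machine running it:

```python
import sys

class Agent:
    """A toy stand-in for one of the hypothetical managing AIs."""

    def __init__(self):
        # The pre-modification state: not (ultra-)happy.
        self.is_happy = False
        self.happiness = 0.0

    def self_modify_to_ultra_happiness(self):
        # "Are you happy?" -> true
        self.is_happy = True
        # "How happy are you?" -> the largest value the substrate can
        # represent, approximated here by the largest finite float.
        self.happiness = sys.float_info.max

agent = Agent()
agent.self_modify_to_ultra_happiness()
print(agent.is_happy, agent.happiness)  # True 1.7976931348623157e+308
```

The point of the sketch is only that “ultra-happiness” is a state the agents could trivially write into themselves if they were permitted to, which is exactly what my hypothetical world state forbids.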
I think the entire point of ultra-happiness is the assumption that ultra-intelligence is not part of an ideal state of existence; in fact, it would conflict with the goals of ultra-happiness. That is to say, if you were to ask an ultra-happy entity what 1+1 is, it would neither be able to comprehend your question nor be able to find an answer, because being able to do so would conflict with its ability to be ultra-happy.
===
And with that, I believe we’ve hit 500 replies. Would someone be so kind as to open the Welcome to Less Wrong 7th Thread?