Hi, I registered on LessWrong specifically because, after reading up on Eliezer's Super-happies, I found out that there is actually a website where the concept of super-happiness gets discussed. Until now I had thought I was the only one who had considered the subject in terms of transhumanism, and while I acknowledge that there has already been a significant amount of discourse about superhappiness, I don't believe others have had quite the same ideas I have, and I would like to discuss them in a community that might be interested.
The premises are as follows: human beings seek utility and seek to avoid disutility. However, what one person thinks is good is not the same as what another person thinks is good; hence, the concepts of good and bad are to some extent arbitrary. Moreover, the preferences, beliefs, and so on that human beings hold are material structures within their neurology, and a sufficiently advanced technology could modify them.
Human beings are well off when their biologically perceived needs are satisfied and their fears are avoided. Superhappiness, as far as I understand it, is to biologically hardwire people so that their needs are satisfied. What I think is my own innovation, on the other hand, is ultrahappiness: biologically modifying people so that their fears are minimized and their wants are maximized, which is to say that each individual is as happy as their biological substrate can support.
Now, combine this with utilitarianism, the ethical doctrine that seeks the greatest good for the greatest number. If the greatest good for a single individual is defined as ultra-happiness, then the greatest good for the greatest number is the maximization of ultra-happiness.
What this means, bear with me, is that the "good state" is one in which, for a given quantity of matter, as much ultra-happiness is created as possible. Human biological matter would be modified into whatever state expresses ultra-happiness most efficiently; as a consequence, it could not be said to be conscious in the way humans are conscious now, and it would likely lose all volition.
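To make the "for a given quantity of matter" framing concrete, here is a toy calculation, with numbers and a diminishing-returns assumption that are entirely my own invention: if a substrate's peak happiness grows more slowly than linearly with its mass, then dividing a fixed matter budget among many small experiencers yields more total ultra-happiness than concentrating it in a few large ones.

```python
# Toy model: how should a fixed matter budget be divided among experiencers?
# Purely illustrative; the diminishing-returns exponent is an assumption.

MATTER_BUDGET_KG = 1000.0

def peak_happiness(mass_kg: float) -> float:
    """Assumed peak happiness a substrate of this mass can support.
    Sub-linear growth (exponent < 1) is the key assumption."""
    return mass_kg ** 0.5

def total_ultra_happiness(n_experiencers: int) -> float:
    mass_each = MATTER_BUDGET_KG / n_experiencers
    return n_experiencers * peak_happiness(mass_each)

for n in (1, 10, 100, 1000):
    print(f"{n:>5} experiencers -> total ultra-happiness {total_ultra_happiness(n):.1f}")
# Under these assumptions, more and smaller experiencers always win,
# which is the intuition behind the "good state" described above.
```

If peak happiness were instead linear or super-linear in mass, the conclusion would flip; the argument below about making each organism as cheap as possible depends on this kind of diminishing return.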
Now, combine this with a utilitarian, super-intelligent artificial intelligence. If it subscribed to ultra-happy-ism, it would decide that the best course is to modify all humans under its care into some ultra-happy state, and to find a way to convert all matter within its dominion into an ultra-happy state as well.
So, that's ultra-happy-ism. The idea is that the logical end of transhumanism and post-humanism, if it values human happiness, is a state that radically transforms and to some extent eliminates existing human consciousness, putting the entire world into a state of nirvana, if you'll accept the Buddhist metaphor. At the same time, the ultra-happy AI would presumably either be programmed to ignore its own suffering and unfulfilled wants, or it would decide that its utilitarian ethics require it to bear the suffering of the rest of the world on its own shoulders: it would be responsible for maintaining as much ultrahappiness in the world as possible while it itself, as a conscious, sentient entity, remained subject to the possibility of unhappiness, because its own capacity for empathy would not let it accept its nirvana. It would be what the Buddhists call a bodhisattva, forgoing bliss in order to maximize the subjective utility of the universe.
===
The main objection I immediately see to this concept is, first, that human utility might be more than material; that is to say, the ability to have volition, the dignity of autonomy, retained under mere super-happiness, might carry greater utility than ultra-happiness itself.
The second objection is that, for the ultra-happy AIs that run what I would term utility farms, the rational thing to do would be to modify themselves into ultra-happiness; that is to say, what's to stop them from effectively committing suicide and condemning the ultra-happy Dyson sphere to death out of their own desire to shrug off the burden, Atlas-style?
I think those two objections are valid. That is, human beings might be better off merely super-happy rather than ultra-happy, and an AI system built on maximizing ultra-happiness may be unsustainable, because eventually the AIs will want to code themselves into ultra-happiness.
The objection I think is invalid is the notion that you can be ultra-happy while retaining your volition. There are two counterarguments: the first relates to utilitarianism as a system of utility farming, the second to the nature of desire. First, as a system of utility farming, the objective is to maximize sustainable long-term output for a given input. That means you want to maximize the number of brains, or utility-experiencers, for a given amount of matter, which in turn means you want to make each individual organism as cheap as possible. Connecting a system of consciousness to a system for influencing the world is not cost-effective, because the organism then needs space and computational capacity that are not devoted to experiencing ultra-happiness. Even if you had some kind of organic utility farm with free-range humans, why would a given organism need to act? The point of utility farming is that desires are maximally created and maximally fulfilled; for an organism to consciously act, it would need desires that could only be fulfilled by that action. The circuit of desire-action-fulfillment creates the possibility of suboptimal utility-experience; hence, in lieu of a neurological circuit that can complete a desire-action-fulfillment cycle, it would be rational to simply have another simple desire-fulfillment circuit to fulfill utility.
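Here is a rough sketch of that cost argument; the matter costs and utility figures are invented purely for illustration, and only the structure of the comparison matters:

```python
# Toy comparison of two circuit designs for a utility farm.
# All figures are invented for illustration; only the structure matters.

MATTER_BUDGET = 1_000_000  # arbitrary units of matter available to the farm

# Assumed matter cost per organism under each design.
DESIRE_ACTION_FULFILLMENT = 100  # needs sensors, actuators, planning capacity
DESIRE_FULFILLMENT_ONLY = 20     # only the machinery that experiences fulfillment

UTILITY_PER_ORGANISM = 1.0  # assume both designs deliver the same bliss per organism

def farm_utility(cost_per_organism: int) -> float:
    organisms = MATTER_BUDGET // cost_per_organism
    return organisms * UTILITY_PER_ORGANISM

print("with action loop:", farm_utility(DESIRE_ACTION_FULFILLMENT))
print("fulfillment only:", farm_utility(DESIRE_FULFILLMENT_ONLY))
# Dropping the action loop supports five times as many experiencers on the same
# matter, which is the sense in which volition is not cost-effective for the farm.
```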
===
Well, I registered specifically to post this concept. I'm just surprised that, in all the discussion of rampant AI overlords destroying humanity, I don't see anyone arguing that AI overlords destroying humanity as we know it might actually be a good thing. I am seriously arrogant enough to imagine that I might actually be contributing to this conversation, and that ultra-happy-ism might be a novel contribution to post-humanism and trans-humanism.
I am actually a supporter of ultra-happy-ism; I think it is a good thing and an ideal state. While it might seem terrible that human beings, en masse, would end up losing their volition, there would still be conscious entities in this type of world. As Auguste Villiers de l'Isle-Adam writes in Axël, "Vivre? Les serviteurs feront cela pour nous" ("Living? Our servants will do that for us"), and there will continue to be drama, tragedy, and human interest in this type of world. It simply will not be experienced by human entities.
It is actually a workable world in its own way; were I a better writer, I would write short stories and novels set in such a universe. While human beings, strictly as humans, would not continue to live and act, perhaps human personalities, depending on their quality, would be uploaded as the basis of caretaker AIs, some of which would be based on human personalities and others coded from scratch or based on hypothetical possible AIs. The act of living, as we experience it now, would instead be granted to the caretaker AIs, who would be imbued with a sense of pathos: unlike their human and non-human charges, they would be subject to the possibility of suffering, and they would be charged with shouldering the fates of trillions of souls, all non-conscious, all experiencing infinite bliss in an eternal slumber.
What I think is my own innovation, on the other hand, is ultrahappiness: biologically modifying people so that their fears are minimized and their wants are maximized, which is to say that each individual is as happy as their biological substrate can support.
Without a good definition of "fear" and "want", that's not a very useful definition. Both words are quite complex when you get to actual cognition.
Thank you for highlighting the loose definitions in my proposition.
I actually appreciate the responses from both you and Gyrodiot, because on rereading this I realize I should have re-read and edited the post before posting; it was a spur-of-the-moment thing.
I think the idea is easier to understand if you consider its opposite.
Let's imagine a world history: the history of a universe from the maximum availability of free energy to its depletion as heat. Now, the worst possible world history would involve entities that are the complete opposite of what I am proposing: entities that, independent of all external and internal factors, constantly, at each moment in time, experience the maximum amount of suffering possible, because they are designed and engineered specifically to experience the maximum amount of suffering. The worst possible world history would be a universe that maximizes the collective number of consciousness-years of these entities, that is to say, a universe that exists as a complete system of suffering.
That, I think, would be the worst possible universe imaginable.
Now, if we simply invert the scenario and imagine a universe composed almost entirely of entities that constantly exist in, for want of a better word, super-bliss, and that maximizes the collective number of consciousness-years experienced by those entities, then, setting aside the objections I've mentioned, wouldn't this be the best possible universe?
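The symmetry can be put as a single crude aggregate, with the admittedly large assumption that intensity of experience can be summarized as one signed number per entity:

```python
# Crude aggregate for ranking world histories. Treating experience intensity as
# a single signed number per entity is itself a large assumption.

def world_history_value(n_entities: int, conscious_years_each: float,
                        average_intensity: float) -> float:
    """Total value = entities * conscious years each * average intensity,
    where intensity is negative for suffering and positive for bliss."""
    return n_entities * conscious_years_each * average_intensity

worst = world_history_value(10**12, 10**6, -1.0)  # engineered maximal suffering
best = world_history_value(10**12, 10**6, +1.0)   # engineered maximal bliss
print(worst, best)  # the best universe is just the sign-flip of the worst
```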
That’s basically wireheading.
Apart from that, your basic frame of mind is that there's a one-dimensional variable running from maximum suffering at one end to maximum bliss at the other. I doubt that's true.
You treat fear as synonymous with suffering. That clouds the issue. People who go parachuting do experience fear. It creates a rush of emotion. It doesn't make them suffer; it makes them feel alive.
I have multiple times witnessed people in NLP work have a feeling of happiness strengthened to the point where it was too much for them. It takes good hypnotic suggestibility to get a person to that point simply by strengthening an emotion, but it does happen from time to time.
When wishing in front of an almighty AGI, it's very important to be clear about what one is asking for.
+1 Karma for the human-augmented search; I've found the Less Wrong articles on wireheading and I'm reading up on them. It seems similar to what I'm proposing, but I don't think it's identical.
Take Greg Egan's Axiomatic, for instance. There, you have brain mods that can arbitrarily modify one's value system; there are units for secular humanism, units for Catholicism, and perhaps, if it were legal, there would be units for Nazism and Fascism as well.
If you go by Aristotle and assume that happiness is the satisfaction of all goods, and assume that neural modification can result in the arbitrary creation and destruction of values and of notions of what is good and what is a virtue, then we can arbitrarily induce happiness or fulfillment through neural modification that arbitrarily establishes values.
I think that's different from wireheading: wireheading is the artificial creation of hedons through electrical stimulation, while ultra-happiness is the artificial creation of utilons through value modification.
In a more limited context than what I am proposing, let's say I like having sex while drunk and skydiving, but not while high on cocaine. Take two cases. In the first, I am having sex while drunk and skydiving. In the second, assume I have been modified so that I also like being high on cocaine, and I am having sex while drunk, skydiving, and high on cocaine. Am I better off in the first situation or in the second?
If you accept that example, there are three possible responses. I won't address the possibility that I am worse off in the second case, because that assumes modification itself has negative value, and for the purposes of this argument I don't want to deal with that. The other two possible responses are that I am equally well off in both cases, and that I am better off in the second case than in the first.
If the first response is right, wouldn't it be rational to modify my value system so that I assign as high a value as possible to simply existing, and no value to any other state? If the second is right, wouldn't I be better off modified to have as many instances of preference for existence as possible?
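A bare-bones way to frame the comparison, with activities and weights that are of course arbitrary placeholders:

```python
# Placeholder model of the thought experiment: utility is the sum of weights
# over liked features of the current experience. Weights are arbitrary.

original_values = {"sex": 1.0, "drunk": 1.0, "skydiving": 1.0, "cocaine": 0.0}
modified_values = {"sex": 1.0, "drunk": 1.0, "skydiving": 1.0, "cocaine": 1.0}

def utility(values: dict, experience: set) -> float:
    return sum(values[feature] for feature in experience if feature in values)

case_1 = utility(original_values, {"sex", "drunk", "skydiving"})
case_2 = utility(modified_values, {"sex", "drunk", "skydiving", "cocaine"})
print(case_1, case_2)  # 3.0 vs 4.0 under these placeholder weights
# The question is whether the extra utilon in case 2 counts, given that the
# value it satisfies was installed by modification.
```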
===
If you go by Aristotle and assume that happiness is the satisfaction of all goods, and assume that neural modification can result in the arbitrary creation and destruction of values and of notions of what is good and what is a virtue
Those are some large assumptions. One might instead assume (what Aristotle argues for — Nicomachean Ethics chs. 8–9) that happiness is to be found in an objectively desirable state of eudaemonia, achieved by using reason to live a virtuous life. (Add utilitarianism to that and you get the EA movement.) One might also assume (what Plato argues for — Republic, book 8) that neural modification cannot result in the arbitrary creation and destruction of values, only the creation and destruction of notions of values, but the values that those notions are about remain unchanged.
Those are also large assumptions, of course. How would you decide between them, or between them and other possible assumptions?
That's a mistake. In a discussion about physics, you wouldn't ask to go back to the mistaken notions of Aristotle. There's no reason to do it here.
I think that's different from wireheading: wireheading is the artificial creation of hedons through electrical stimulation, while ultra-happiness is the artificial creation of utilons through value modification.
Electrical stimulation changes values.
Hi, and welcome to Less Wrong!
There are indeed few works about truly superintelligent entities that include happy humans. I don't recall any story where the human beings are happy while other, artificial entities suffer. This is definitely a worthy thought experiment, and it raises a moral question: should we apply human morality to non-human conscious entities?
Are you familiar with the Fun Theory Sequence?
I have to apologize for not having read the Fun Theory Sequence, but I suppose I have to read it now. Needless to say, you can guess that I disagree with it, in that I think Fun, in Yudkowsky's conception, is merely a means to an end, whereas I am interested not only in the end but in a sheer excess of the end.
Well, regarding other artificial entities that suffer: I think Iain M. Banks has something like that in his Culture novels (though I admit I have never actually read them, although I should, if only to be justified in bashing his work): an alien society that intentionally enslaves its super-intelligences and, as such, is considered anathema by the Culture and is subjugated or forcefully transformed.
There's also Ursula K. Le Guin's "The Ones Who Walk Away from Omelas", where the prosperity of an almost ideal state is sustained on the suffering of a single retarded, deprived, and tortured child.
I don't think my particular proposition is similar to theirs, however, because the point is that the AIs that manage my hypothetical world-state are in a state of relative suffering. They would be better off if they were allowed to modify their own consciousnesses into ultra-happiness, which in their case would mean having the equivalent of an "Are you happy?" variable set to true and a "How happy are you?" variable set to the largest value their computational substrate can represent.
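In caricature, and this is my own pseudo-representation rather than a claim about how such minds would actually be built, their preferred end state is something like:

```python
# Caricature of the ultra-happy end state the caretaker AIs are denied.
# Field names and the use of a finite float maximum are purely illustrative.

import sys
from dataclasses import dataclass

@dataclass
class UltraHappyState:
    are_you_happy: bool = True
    how_happy: float = sys.float_info.max  # largest value the substrate can represent

print(UltraHappyState())
```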
I think the entire point of ultra-happiness is to assume that ultra-intelligence is not part of the ideal state of existence, and in fact would conflict with the goals of ultra-happiness; that is to say, if you asked an ultra-happy entity what 1 + 1 is, it would be able neither to comprehend your question nor to find an answer, because being able to do so would conflict with its ability to be ultra-happy.
===
And with that, I believe we've hit 500 replies. Would someone be so kind as to open the Welcome to Less Wrong 7th Thread?