What I think is my own innovation, on the other hand, is [b]ultrahappiness[/b], which is to biologically modify people so that their fears are minimized and their wants are maximized, which is to say that each individual is as happy as their biological substrate can support.
Without a good definition of “fear” and “want” that’s not a very useful definition. Both words are quite complex when you get to actual cognition.
Thank you for highlighting loose definitions in my proposition.
I actually appreciate the response from both you and Gyrodiot, because on rereading this I realize I should have re-read and edited the post before posting, but this was one of those spur-of-the-moment things.
I think the idea is easier to understand if you consider its opposite.
Let’s imagine a world history: the history of a universe that exists from the maximum availability of free energy to its depletion as heat. Now, the worst possible world history would involve the existence of entities completely opposite to what I am trying to propose; entities who, independent of all external and internal factors, constantly, at every moment in time, experience the maximum amount of suffering possible, because they are designed and engineered specifically to experience it. The worst possible world history would be a universe that maximizes the collective number of consciousness-years of these entities, that is to say, a universe that exists as a complete system of suffering.
That, I think, would be the worst possible universe imaginable.
Now, if we simply invert the scenario and imagine a universe that is composed almost entirely of entities that constantly exist in, for want of a better word, super-bliss, and that maximizes the collective number of consciousness-years experienced by its entities, then, setting aside the objections I’ve mentioned, wouldn’t this be, instead, the best possible universe?
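One rough way to put the comparison in symbols (a sketch only, and it leans on the assumption, questioned below, that each entity’s momentary wellbeing is a single number):

\[
V(\text{history}) \;=\; \sum_{i \in \text{entities}} \int_{0}^{T_i} w_i(t)\, dt, \qquad w_i(t) \in [w_{\min}, w_{\max}],
\]

where $T_i$ is the conscious lifespan of entity $i$ and $w_i(t)$ is its wellbeing at time $t$. The worst history pins every $w_i(t)$ at $w_{\min}$ while maximizing the total consciousness-years $\sum_i T_i$; the inverted, best history pins every $w_i(t)$ at $w_{\max}$ under the same maximization.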
That’s basically wireheading.
Apart from that, your basic frame of mind is that there is a one-dimensional variable running from maximum suffering at one end to maximum bliss at the other. I doubt that’s true.
You treat fear as synonymous with suffering. That clouds the issue. People who go parachuting do experience fear. It creates a rush of emotions. It doesn’t make them suffer; it makes them feel alive.
I have multiple times witnessed, in NLP, people whose happiness was made strong enough that it was too much for them. It takes good hypnotic suggestibility to get a person to that point simply by strengthening an emotion, but it does happen from time to time.
When wishing in front of an almighty AGI, it’s very important to be clear about what one is asking for.
+1 Karma for the human augmented search; I’ve found the Less Wrong articles on wireheading and I’m reading up on it. It seems similar to what I’m proposing, but I don’t think it’s identical.
Take Greg Egan’s Axiomatic, for instance. There, you have brain mods that can arbitrarily modify one’s value system; there are units for secular humanism, units for Catholicism, and perhaps, if it were legal, there would probably be units for Nazism and Fascism as well.
If you go by Aristotle and assume that happiness is the satisfaction of all goods, and assume that neural modification can result in the arbitrary creation and destruction of values and of notions of what is good and what is a virtue, then we can arbitrarily induce happiness or fulfillment through neural modification that arbitrarily establishes values.
I think that’s different from wireheading: wireheading is the artificial creation of hedons through electrical stimulation, while ultra-happiness is the artificial creation of utilons through value modification.
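A toy sketch of that distinction, with hypothetical names and a deliberately crude model of an agent (this is only an illustration of the contrast, not a claim about how cognition actually works):

    def hedons(felt_pleasure, stimulation_boost=0.0):
        # Wireheading: inflate the felt pleasure signal directly,
        # leaving the agent's value system untouched.
        return felt_pleasure + stimulation_boost

    def utilons(state, values):
        # Utilons: how strongly the current state satisfies the current value system.
        return sum(weight for feature, weight in values.items() if state.get(feature))

    original_values = {"drunk": 1.0, "skydiving": 1.0, "cocaine": -1.0}
    modified_values = {"drunk": 1.0, "skydiving": 1.0, "cocaine": 1.0}  # value modification

    state = {"drunk": True, "skydiving": True, "cocaine": True}

    print(hedons(1.0, stimulation_boost=10.0))   # 11.0: more hedons, same values
    print(utilons(state, original_values))       # 1.0: the cocaine clashes with the old values
    print(utilons(state, modified_values))       # 3.0: the same state, under the modified values

The only point of the sketch is that the two interventions act on different variables: one on the pleasure signal, the other on the value system against which utilons are measured.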
In a more limited context than what I am proposing: let’s say I like having sex while drunk and skydiving, but not while high on cocaine. Take two cases. In the first, I am having sex while drunk and skydiving. In the second, assume I have been modified so that I like having sex while drunk, skydiving, and high on cocaine, and that I am having sex while drunk, skydiving, and high on cocaine. Am I better off in the first situation or in the second?
If you accept that example, then there are three possible responses. I won’t address the possibility that I am worse off in the second case, because that assumes modification has negative value, and for the purposes of this argument I don’t want to deal with that. The other two possible responses are that I am equally well off in both cases, and that I am better off in the second case than in the first.
In the first case, wouldn’t it then be rational to modify my value system so that I assign as high a value as possible to simply being, and no value to any other state? In the second case, wouldn’t I be better off if I were modified to have as many instances of preference for existence as possible?
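A toy calculation of where that leads, on the assumption (mine, for illustration only) that post-modification preferences count at face value: if my unmodified value system is fully satisfied for, say, 10% of my conscious moments, while a value system modified to prize mere existence is satisfied for 100% of them, then over the same lifespan T the unmodified self accumulates 0.1·T value-satisfied moments and the modified self accumulates T. On either of the two remaining responses above, the modification is at least as good, and iterating the argument terminates in a value system that prizes nothing but existing.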
==
And with that, I believe we’ve hit 500 replies. Would someone be so kind as to open the Welcome to Less Wrong 7th Thread?
If you go by Aristotle and assume that happiness is the satisfaction of all goods, and assume that neural modification can result in the arbitrary creation and destruction of values and of notions of what is good and what is a virtue
Those are some large assumptions. One might instead assume (what Aristotle argues for — Nicomachean Ethics chs. 8–9) that happiness is to be found in an objectively desirable state of eudaemonia, achieved by using reason to live a virtuous life. (Add utilitarianism to that and you get the EA movement.) One might also assume (what Plato argues for — Republic, book 8) that neural modification cannot result in the arbitrary creation and destruction of values, only the creation and destruction of notions of values, but the values that those notions are about remain unchanged.
Those are also large assumptions, of course. How would you decide between them, or between them and other possible assumptions?
That’s a mistake. In a discussion about physics you wouldn’t ask to go back to Aristotle’s mistaken notions. There’s no reason to do it here.
I think that’s different from wireheading: wireheading is the artificial creation of hedons through electrical stimulation, while ultra-happiness is the artificial creation of utilons through value modification.
Electrical stimulation changes values.