The problem with the ‘god-shaped hole’ situation (and questions of happiness in general) is that if something doesn’t make you happy NOW, it becomes very difficult to believe that it will make you happy LATER.
For example, say some Soma-drug was invented that, once taken, would make you blissfully happy for the rest of your life. Would you take it? Our immediate reaction is to say ‘no’, probably because we don’t like the idea of ‘fake’, chemically-induced happiness. In other words, because the idea doesn’t make us happy now, we don’t really believe it will make us happy later.
Valuing truth seems like just another way of saying truth makes you happy. Because filling the god-shaped hole means not valuing truth, the idea doesn’t make you happy right now, so you don’t really believe it will make you happy later.
I try my best to value other people’s happiness equally with my own. If taking a happiness-inducing pill were likely to make me a kinder, more generous, more productive person, I would choose to take it (with some misgivings about it seeming like ‘cheating’ and ‘not good for character-building’), but if it were to make me less kind, generous, or productive, I would have much stronger misgivings.
I would definitely take the Soma, and don’t see why anyone wouldn’t. Odd, the differences between what people find acceptable.
Is anyone else with me in desiring chemically-induced happiness as much as any other? (Well, all happiness is chemically-induced, when you get right down to it, so I assume there are no qualitative differences.)
I wouldn’t take it. I desire to help others, and it gives me pleasure to do so; it makes me suffer to harm others, and I desire not to do so.
Being perpetually in a state of extreme pleasure would make this pleasure/suffering irrelevant, and might lead me to behave less in line with my desires.
So, being perpetually in a state of extreme pleasure seems like a bad idea to me.
I agree with you completely. I can understand why others might not agree with me, but for me, pleasure isn’t so much a goal as a result of accomplishing my goals.
I’m reminded of Yudkowsky’s Not For the Sake of Happiness Alone.
I think one of the points underrepresented in these “Not For the Sake of XXX Alone” posts is how people would respond to the least convenient possible world, in which they would be forced to make sharp trade-offs between competing values.
For instance, I value diversity, a kind of narrative depth to raw experiences. But if I had to choose either sustainable, chemically induced unsophisticated pleasure or else diverse pain and misery with narrative depth, I’d almost certainly choose the pleasure.
This is relevant to FAI and CEV, I think. If the success probability of a simple, pleasure-generating FAI is higher than that of a more sophisticated (and difficult) “Not For the Sake of XXX Alone”-respecting FAI, it might be better to opt for the pleasure-generating version.
Agreed. I also think people tend to underestimate the goodness of pure bliss: I have experienced such a state, and I’m here to tell you, the concerns about XXX become far more minor than you would expect. They don’t disappear—if you like painting, you’ll still want to paint—but you suddenly understand how minor the pleasure painting gives you really is, in comparison.
Or at least that’s how I felt, anyway.
He makes good points, but note that there’s nothing saying you couldn’t take Soma and participate in the joy of scientific discovery (or whatever).
The argument wasn’t that you need the joy of scientific discovery; it was that scientific discovery is important to us for reasons entirely apart from joy. You would never want a Soma substitute for scientific discovery, because that wouldn’t involve… you know… actual scientific discovery.
For a different take on this, see Yvain’s Are Wireheads Happy?.
This is just wire-heading, isn’t it? At least, that is what you should search for if you want to hear what people on this site tend to think about this sort of idea. I am not certain of my own view of it. I tend to think I’d wire-head at first, but some implications I notice on further reflection make me unsure.
Same here. That is, I know I’d wirehead—I don’t see any bothersome implications with that idea alone. However, if you add in something like “once you wirehead you are immobile and cannot do anything else”, then I become more unsure.
It does not matter if you are immobilized. Once you are wire-heading, there is no reason you would ever stop, since you already have peak pleasure/joy; I think this effectively immobilizes you. There is no problem that could come to you that wouldn’t be best solved by more wire-heading, except a threat to the wire-heading itself.
I think you’re simply assuming that we’re motivated primarily by happiness in that case.
Valuing the truth doesn’t suddenly make me happy when someone announces to me, and I verify, that my entire family has been eaten by wombats. If I didn’t value the truth at all, I might be able to ignore reality and persist in my erroneous belief that my family is alive and wombats are as cute and cuddly as I have always believed. But I don’t try to do that, and I don’t regret my decision or any inability to maintain erroneous beliefs.
A soma drug offends my sensibilities on some level. It violates my moral value of “don’t mess around with my brain except through standard sensory experiences or with my explicit and informed consent” (and no brainwashing: no concerted efforts to modify my opinions or attitudes or behaviors except through normal human interactions, like arguing and talking and long walks on the beach).
I value at least some of these moral sensibilities more highly than my current or future happiness. That’s why I would choose against the soma, not because I doubt its efficacy.