Human values are complex; there’s no reason to think that our values reduce to happiness, and lots of evidence that they don’t.
Let’s imagine two possible futures for humanity:
One, a drug is developed that offers unimaginable happiness, a thousand times more intense than heroin or whatever drug currently produces the most happiness. Everyone is cured of aging and then hooked up to a machine that dispenses this drug until the heat death of the universe. The rest of our future light cone is converted into orgasmium. Everyone is maximally happy.
Two…
I think an eternity of what we’ve got right now would be better than number one, but I imagine lots of people on LessWrong would disagree with that. The best future I can imagine would be one where we make our own choices and our own mistakes, where we learn more about the world around us, get smarter, and get stronger, a world happier than this one, but not cured of disappointment and heartbreak entirely… Eliezer’s written about this at some length.
Some people honestly prefer future 1, and that’s fine. But the original poster seemed to be saying that he accepts future 1 is right yet would hate it, which should be a red flag.
I don’t think a drug would be adequate. Bland happiness is not enough; I would prefer a future with an optimal mix of pleasurable qualia. This is why I prefer the “wireheading” term.
I don’t understand how you could possibly prefer the status quo. Imagine everything was exactly the same but one single person was a little bit happier. Wouldn’t you prefer this future? If you prefer futures where people are happier as a rule then isn’t the best future the one where people are most happy?
I don’t understand how he could hate being happy. People enjoy being happy by definition.
Imagine everything was exactly the same but one single person was a little bit happier. Wouldn’t you prefer this future? If you prefer futures where people are happier as a rule then isn’t the best future the one where people are most happy?
Choosing a world where everything is the same except that one person is a bit happier suggests a preference for more happiness than there currently is, all else being equal. It doesn’t even remotely suggest a preference for happiness maximizing at any cost.
I would prefer, to this world, one where everything is exactly the same except that I have a bit more ice cream in my freezer than I currently do, but I don’t want the universe tiled with ice cream.
So you would prefer a world where everyone is maximally happy all the time but otherwise nothing is different?
Just as, under the (ridiculous) assumption that the marginal utility of ice cream is constant, you would prefer a universe tiled with ice cream as long as it didn’t get in the way of anything else or use resources important for anything else?
So you would prefer a world where everyone is maximally happy all the time but otherwise nothing is different?
I think this has way too many consequences to frame meaningfully as “but nothing otherwise is different.” Kind of like “everything is exactly the same except the polarity of gravity is reversed.” I can’t judge how much utility to assign to a world where everyone is maximally happy all the time but the world is otherwise just like ours, because I can’t even make sense of the notion.
If you assign constant marginal utility to increases in ice cream and assume that ice cream can be increased indefinitely while keeping everything else constant, then of course you can increase utility by continuing to add more ice cream, simply as a matter of basic math. But I would say that not only is it not a meaningful proposition, it’s not really illustrative of anything in particular save for how not to use mathematical models.
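The difference between constant and diminishing marginal utility can be made concrete with a toy model. This is purely an illustrative sketch; the functions and numbers are hypothetical, not anything claimed in the thread:

```python
import math

def utility_constant(scoops, marginal=1.0):
    """Constant marginal utility: every extra scoop adds the same
    amount, so total utility grows without bound as scoops pile up."""
    return marginal * scoops

def utility_diminishing(scoops):
    """Diminishing marginal utility (a log curve): each extra scoop
    adds less than the one before, so tiling the universe with ice
    cream buys almost nothing over a modest amount."""
    return math.log(1 + scoops)

# From one scoop to a universe tiled with ice cream:
for scoops in (1, 10, 10**6):
    print(scoops, utility_constant(scoops),
          round(utility_diminishing(scoops), 2))
```

Under the constant model, more ice cream is always strictly better, which is exactly the “basic math” conclusion above; under the (more realistic) diminishing model, the millionth scoop is nearly worthless, which is why the constant assumption illustrates how not to use mathematical models.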
I would prefer “status quo plus one person is more happy” to “status quo”. I would not prefer “orgasmium” to “status quo”, because I honestly think orgasmium is nearly as undesirable as paperclips.
If you prefer futures where people are happier as a rule then isn’t the best future the one where people are most happy?
Doesn’t follow. I generally prefer futures where people are happier; I also generally prefer futures where they have greater autonomy, novel experiences, meaningful challenge… When these trade off, I sometimes choose one, sometimes another. The “best future” I can imagine is probably a balance of all of these.
I don’t understand how he could hate being happy. People enjoy being happy by definition.
Present-him is presumably very unhappy at the thought of becoming someone who will happily be a wirehead, just as present-me doesn’t want to try heroin even though it would undoubtedly make me happy.
But why would you want to live in a world where people are less happy than they could be? That sounds terribly evil.
I don’t think bland happiness is optimal. I’d prefer happiness along with an optimal mixture of pleasant qualia.