Well, I am new here, and I suppose it was slightly presumptuous of me to say that. I was just trying to introduce myself with a few of the thoughts I’ve had while reading here.
To attempt to clarify, I think that this story is rather like the fable of the Dragon-Tyrant. To live a life with even the faintest hint of displeasure is a horrific crime, the thought goes. I am under the impression that most people here operate with some sort of utilitarian philosophy. This seems to imply that unless one declares that there is no objective state toward which utilitarianism is directed, humanity in this example is wrong. (In case someone wants to draw a distinction between ethics and morals: as an engineer, it doesn’t strike me as important.)
As for akrasia, I don’t see this as a case of it. My own judgement says that a life like theirs is vapid and devoid of meaning. Fighting to the death against one’s own best judgement probably isn’t rare either; I expect many, many soldiers have died fighting wars they despised, even when they had options other than fighting. In effect, I feel like this is multiplying by zero and then adding infinity. You have more at the end; you’re just no longer the unique, complex individual you were, and I could not bear to submit to that.
To live a life with even the faintest hint of displeasure is a horrific crime, the thought goes. I am under the impression that most people here operate with some sort of utilitarian philosophy. This seems to imply that unless one declares that there is no objective state toward which utilitarianism is directed, humanity in this example is wrong.
The general thrust of the Superhappy segments of Three Worlds Collide seems to be that simple utilitarian schemas based on subjective happiness or pleasure are insufficient to describe human value systems or preferences as they’re expressed in the wild. Similar points are made in the Fun Theory sequence. Neither of these means that utilitarianism generally is wrong; merely that the utility function we’re summing (or averaging over, or taking the minimum of, etc.) isn’t as simple as sometimes assumed.
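To make that parenthetical concrete, here is a minimal toy sketch of how the same set of individual utilities can be ranked differently depending on whether you sum them, average them, or take the minimum. The numbers and the three rules are invented purely for illustration; nothing here comes from the story or the sequences.

```python
# Toy illustration: the same individual utilities, aggregated three different ways.
# All numbers are invented; nothing here is from the story or the sequences.

outcome_a = [10.0, 10.0, 10.0]   # everyone moderately well off
outcome_b = [30.0, 5.0, 1.0]     # higher total, but the worst-off person does badly

def total(utilities):
    return sum(utilities)

def average(utilities):
    return sum(utilities) / len(utilities)

def worst_off(utilities):
    return min(utilities)

for rule in (total, average, worst_off):
    print(rule.__name__, rule(outcome_a), rule(outcome_b))
# total and average rank B above A (36 > 30, 12 > 10);
# the worst-off (minimum) rule ranks A above B (10 > 1).
```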
Now, Fun Theory is probably one of the less well-developed sequences here (unfortunately, in my view; it’s a very deep question, and intimately related to human value structure and all its AI consequences), and you’re certainly free to prefer 3WC’s assimilation ending or to believe that the kind of soft wireheading the Superhappies embody really is optimal under some more or less objective criterion. That does seem to be implied in one form or another by several major schools of ethics, and any intuition pump I could deploy to convince you otherwise would probably end up looking a lot like the Assimilation Ending, which I gather you don’t find convincing.
Personally, though, I’m inclined to be sympathetic to the True Ending, and think more generally that pain and suffering tend to be wrongly conflated with moral evil when in fact there’s a considerably looser and more subtle relationship between the two. But I’m nowhere near a fully developed ethics, and while this seems to have something to do with the “complexity” you mentioned I feel like stopping there would be an unjustified handwave.
As for akrasia, I don’t see this as a case of it. My own judgement says that a life like theirs is vapid and devoid of meaning. Fighting to the death against one’s own best judgement probably isn’t rare either; I expect many, many soldiers have died fighting wars they despised, even when they had options other than fighting. In effect, I feel like this is multiplying by zero and then adding infinity. You have more at the end; you’re just no longer the unique, complex individual you were, and I could not bear to submit to that.
And you think that not being able to bear submitting to that is wrong?
Personally, I’m one of those who prefer the assimilation ending (there are quite a few of us), and I certainly wouldn’t be tempted to fight to the death or kill myself to avoid it. But for a person who would fight to the death to avoid it to say that assimilation is optimal and the True Ending is senseless seems incoherent to me.
I think the confusion comes from what you mean by “utilitarian.” The whole point of Three Worlds Collide (well, one of the points) is that human preferences are not for happiness alone; the things we value include a life that’s not “vapid and devoid of meaning”, even if it’s happy! That’s why (to the extent we have to pick labels) I am a preference utilitarian, which seems to be the most common ethical philosophy I’ve encountered here (we’ll know more when Yvain’s survey comes out). If you prefer not to be a Superhappy, then preference utilitarianism says you shouldn’t be one.
When you catch yourself saying “the right thing is X, but the world I’d actually want to live in is Y,” be careful: a world that’s actually optimal would probably be one you want to live in.
If you’re able to summarize what makes the Superhappies’ lives vapid and devoid of meaning, I’d be interested.
TerminalAwareness’s words, not mine. I prefer the True Ending but wouldn’t call the Superhappies’ lives meaningless.
(nods) Gotcha.
I know it’s been some time, but I wanted to thank you for the reply. I’ve thought considerably, and I still feel that I’m right. I’m going to try to explain again.
Sure, we all have our own utility functions. Now, if you’re trying to maximize utility for everyone, that’s no easy task, and you’ll end up with a relatively small amount of utility.
Would you condone forcing someone to try chocolate if they believed it tasted bad but loved it as soon as they tried it? If someone mentally deranged set themselves on fire and asked you not to save them, would you? If someone is refusing cancer treatment because “Science is evil”, I at least would force the treatment on them. Whether you would force transhumanity on everyone who refused it is probably a better question for LessWrong. I feel that, though I may be violating others’ utility functions, we’re all mentally deranged, and so someone should save us. Someone should violate our utility preferences in order to change them, because that would bring an enormous amount of utility.
I’m struggling to reconcile respecting preferences with how much of society today works. Personally, I don’t think anyone should ever violate my utility preferences. But can you deny that there are people you think should have theirs changed? I’m inclined to think that a large part of this community is.
If you haven’t, you should read Yvain’s Consequentialism FAQ, which addresses some of these points in a little more detail.
Would you condone forcing someone to try chocolate if they believed it tasted bad but loved it as soon as they tried it? If someone mentally deranged set themselves on fire and asked you not to save them, would you? If someone is refusing cancer treatment because “Science is evil”, I at least would force the treatment on them. Whether you would force transhumanity on everyone who refused it is probably a better question for LessWrong.
Preference utilitarianism works well for any situation you’ll encounter in real life, but it’s possible to propose questions it doesn’t answer very well. A popular answer to the above question on LessWrong comes from the idea of coherent extrapolated volition (The paper itself may be outdated). Essentially, it asks what we would want were we better informed, more self-aware, and more the people we wished we were. In philosophy, this is called idealized preference theory. CEV probably says we shouldn’t force someone to eat chocolate, because their preference for autonomy outweighs their extrapolated preference for chocolate. It probably says we should save the person on fire, since their non-mentally-ill extrapolated volition would want them to live, and ditto the cancer patient.
Forcing transhumanity on people is a harder question, because I’m not sure that everyone’s preferences would converge in this case. In any event, I would not personally do it, because I don’t trust my own reasoning enough.
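For what it’s worth, here is a toy sketch of the stated-versus-extrapolated-preference distinction treated as a decision rule. It is my own construction, not anything from the CEV paper; the Agent fields, the autonomy_weight parameter, and every number are invented for illustration only.

```python
# A toy decision rule contrasting stated with idealized ("extrapolated") preferences.
# This is my own construction, not anything from the CEV paper; the fields,
# the autonomy_weight parameter, and every number are invented for illustration.

from dataclasses import dataclass

@dataclass
class Agent:
    stated: dict            # preferences the person currently reports
    extrapolated: dict      # what they would want if better informed and more self-aware
    autonomy_weight: float  # how costly overriding their stated wishes is, in their own terms

def should_intervene(agent: Agent, option: str) -> bool:
    """Intervene only if the extrapolated gain outweighs the cost of overriding autonomy."""
    gain = agent.extrapolated.get(option, 0.0) - agent.stated.get(option, 0.0)
    return gain > agent.autonomy_weight

# Chocolate: small extrapolated gain, so autonomy wins and we don't force it.
chocolate_refuser = Agent({"chocolate": -1.0}, {"chocolate": 2.0}, autonomy_weight=10.0)
# Cancer treatment: enormous extrapolated gain, so intervention wins.
treatment_refuser = Agent({"treatment": -5.0}, {"treatment": 100.0}, autonomy_weight=10.0)

print(should_intervene(chocolate_refuser, "chocolate"))   # False
print(should_intervene(treatment_refuser, "treatment"))   # True
```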
But can you deny that there are people you think should have theirs changed?
I think all people, to the extent that they can be said to have utility functions, are wrong about what they want at least sometimes. I don’t think we should change their utility functions so much as implement their ideal preferences rather than their stated ones.
I’m inclined to think that a large part of this community is.
is what? Is willing to change people’s utility functions?
What does this even mean? Forcing immortality on people is at least a coherent notion, although I’m pretty sure most users around here support an individual’s right to self-terminate. But if that was what was meant, calling it ‘transhumanism’ is a little off.
On the other hand, is this referring to something handled by the fun-theoretic concept of a eudaimonic rate of intelligence increase?
Yes, I know, “jargon jargon jargon buzzword buzzword rationality,” but I couldn’t think of a better way to phrase that. Sorry.
You’re right. I don’t know what TerminalAwareness meant, but I was thinking of something like uploading someone who doesn’t want to be uploaded, or increasing their intelligence (even at a eudaimonic rate) if they insist they like their current intelligence level just fine.
If it actually is coherent to speak of a “eudaimonic rate” of doing something to someone who doesn’t want it done, I need to significantly revise my understanding of the word “eudaimonic”.
I’m thinking that a eudaimonic rate of intelligence increase is one which maximizes our opportunities for learning, new insights, enjoyment, and personal growth, as opposed to an immediate jump to superintelligence. But I can imagine an exceedingly stubborn person who insists that they don’t want their intelligence increased at all, even after being told that they will be happier and lead a more meaningful life. Once they get smarter, they’ll presumably be happier with it.
Even if we accept that Fun Theory as outlined by Eliezer really is the best thing possible for human beings, there are certainly some who would currently reject it, right?
It seems to me like you’re trying to enforce your values on others. You might think you’re just trying to help, or do something good. I’m just a bit skeptical of anyone trying to enforce values rather than inspire or suggest.
Quote:
But when it comes to the question, “Don’t I have to want to be as happy as possible?” then the answer is simply “No. If you don’t prefer it, why go there?”
:s/happy/intelligent
I’m not sure if we have a genuine disagreement, or if we’re disputing definitions. So without talking about eudaimonic anything, which of the following do you disagree with, if any?
What we want should be the basis for a better future, but the better future probably won’t look much like what we currently want.
CEV might point to something like uploading or dramatic intelligence enhancement that lots of people won’t currently want, though by definition it would be part of their extrapolated preferences.
A fair share of the population will probably, if polled, actively oppose what CEV says we really want.
It seems unlikely that the optimal intelligence level is the current one, but some people would probably oppose alteration to their intelligence. This isn’t a question of “Don’t I have to want to be as intelligent as possible?” so much as “Is what I currently want a good guide to my extrapolated volition?”
Most of these give me the heebie-jeebies, but I don’t really disagree with them.
But why would you want to live in a world where people are less happy than they could be? That sounds terribly evil.
I don’t think bland happiness is optimal. I’d prefer happiness along with an optimal mixture of pleasant qualia.
Human values are complex; there’s no reason to think that our values reduce to happiness, and lots of evidence that they don’t.
Let’s imagine two possible futures for humanity:
One: a drug is developed that offers unimaginable happiness, a thousand times better than heroin or whatever drug currently creates the most happiness. Everyone is cured of aging and then hooked up to a machine that dispenses this drug until the heat death of the universe. The rest of our future light cone is converted into orgasmium. Everyone is maximally happy.
Two…
I think an eternity of what we’ve got right now would be better than number one, but I imagine lots of people on LessWrong would disagree with that. The best future I can imagine would be one where we make our own choices and our own mistakes, where we learn more about the world around us, get smarter, and get stronger, a world happier than this one, but not cured of disappointment and heartbreak entirely… Eliezer’s written about this at some length.
Some people honestly prefer future 1, and that’s fine. But the original poster seemed to be saying he accepts future 1 is right but would hate it, which should be a red flag.
I don’t think a drug would be adequate. Bland happiness is not enough; I would prefer a future with an optimal mix of pleasurable qualia. This is why I prefer the “wireheading” term.
I don’t understand how you could possibly prefer the status quo. Imagine everything was exactly the same but one single person was a little bit happier. Wouldn’t you prefer this future? If you prefer futures where people are happier as a rule then isn’t the best future the one where people are most happy?
I don’t understand how he could hate being happy. People enjoy being happy by definition.
Imagine everything was exactly the same but one single person was a little bit happier. Wouldn’t you prefer this future? If you prefer futures where people are happier as a rule then isn’t the best future the one where people are most happy?
Choosing a world where everything is the same except that one person is a bit happier suggests a preference for more happiness than there currently is, all else being equal. It doesn’t even remotely suggest a preference for maximizing happiness at any cost.
I would prefer a world exactly like this one except that I have a bit more ice cream in my freezer than I currently do, but I don’t want the universe tiled with ice cream.
So you would prefer a world where everyone is maximally happy all the time but otherwise nothing is different?
Just as, on the ridiculous assumption that the marginal utility of more ice cream is constant, you would prefer a universe tiled with ice cream as long as it didn’t get in the way of anything else or use up resources important for anything else?
So you would prefer a world where everyone is maximally happy all the time but otherwise nothing is different?
I think this has way too many consequences to frame meaningfully as “but nothing otherwise is different.” Kind of like “everything is exactly the same except the polarity of gravity is reversed.” I can’t judge how much utility to assign to a world where everyone is maximally happy all the time but the world is otherwise just like ours, because I can’t even make sense of the notion.
If you assign constant marginal utility to increases in ice cream and assume that ice cream can be increased indefinitely while keeping everything else constant, then of course you can increase utility by continuing to add more ice cream, simply as a matter of basic math. But I would say that not only is it not a meaningful proposition, it’s not really illustrative of anything in particular save for how not to use mathematical models.
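As a quick illustration of that “basic math” point, here is a toy comparison of a constant-marginal-utility model with a diminishing-returns one. The functional forms and numbers are made up; the saturating curve is just one of many ways to model diminishing returns.

```python
# Toy comparison of constant versus diminishing marginal utility of ice cream.
# The functional forms and numbers are invented purely for illustration.

import math

def linear_utility(scoops):
    # Constant marginal utility: every additional scoop adds the same amount, forever.
    return float(scoops)

def saturating_utility(scoops):
    # Diminishing returns: early scoops matter a lot, later ones barely register.
    return 10.0 * math.log1p(scoops)

for scoops in (1, 10, 1_000, 1_000_000):
    print(scoops, linear_utility(scoops), round(saturating_utility(scoops), 1))
# The linear model keeps rewarding you for tiling the universe with ice cream;
# the saturating model captures "a bit more in my freezer, but no tiling, thanks."
```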
I would prefer “status quo plus one person is more happy” to “status quo”. I would not prefer “orgasmium” to “status quo”, because I honestly think orgasmium is nearly as undesirable as paperclips.
If you prefer futures where people are happier as a rule then isn’t the best future the one where people are most happy?
Doesn’t follow. I generally prefer futures where people are happier; I also generally prefer futures where they have greater autonomy, novel experiences, meaningful challenge… When these trade off, I sometimes choose one, sometimes another. The “best future” I can imagine is probably a balance of all of these.
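Here is a toy way to see why “happier is better, all else equal” doesn’t collapse into “maximize happiness at any cost”: if every valued quantity has diminishing returns and they draw on shared resources, the best achievable mix isn’t the corner where happiness is maxed and everything else is zero. The utility form, the weights, and the budget below are invented for illustration.

```python
# Toy illustration: strictly preferring "more happiness, all else equal" does not
# imply that an all-happiness corner solution is best once several valued things
# compete for the same resources. The form and budget are invented.

import math

def utility(happiness, autonomy, novelty):
    # Each term is strictly increasing but saturating (diminishing returns).
    return math.log1p(happiness) + math.log1p(autonomy) + math.log1p(novelty)

budget = 30.0  # pretend the three goods draw on a shared resource budget

all_happiness = utility(budget, 0.0, 0.0)
balanced = utility(budget / 3, budget / 3, budget / 3)

print(round(all_happiness, 2))  # ~3.43
print(round(balanced, 2))       # ~7.19
# The balanced mix scores higher even though utility is strictly increasing in
# happiness: "happier, all else equal" is not an endorsement of orgasmium.
```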
I don’t understand how he could hate being happy. People enjoy being happy by definition.
Present-him presumably is very unhappy at the thought of becoming someone who will happily be a wirehead, just as present-me doesn’t want to try heroin even though it would undoubtedly make me happy.
As for akrasia, I don’t see this as a case of it. My own judgement says that a life like theirs is vapid and devoid of meaning. Fighting to the death against one’s own best judgement probably isn’t rare either; I expect many, many soldiers have died fighting wars they despised, even when they had options other than fighting. In effect, I feel like this is multiplying by zero and then adding infinity. You have more at the end; you’re just no longer the unique, complex individual you were, and I could not bear to submit to that.
It really does seem like either you don’t really believe that the assimilation ending is optimal and you prefer the true ending, or you are suffering from akrasia by fighting against it despite believing that it is. You haven’t really explained why it could be anything else.