LessWrong community, I say hello to you at last!
I’m a first-year chemical engineering student in Canada. At some point I was linked to The AI-Box Experiment by Yudkowsky, probably 3-1/2 years ago; I’m not sure. The earliest record I have, from an old Firefox history file, is Wed Jun 25 20:19:56 ADT 2008. I guess that’s when I first encountered rationality, though it may have been back when I used IE (shudders). I read a lot of his site, and occasionally visited it and againstbias. I thought it was pretty complicated, and that I’d see more of that guy in my life. Years later, here I am.
One concern I have is whether or not I belong here. Sure, I like to learn on my own and do a lot of rationality-related stuff, but to accurately express how bad I am at rationality, I will compare my own abilities to most Republicans’ ability to understand science. On top of that, I don’t think I’m particularly smart. I argued with teachers and got a ~93% average in high school, though I like to think I understand things more than most students. I have not taken any formal IQ test, but I consistently score a mere 120 on online tests.
My motivation tends to be highly whimsical, and though I’m attempting to track myself on various fronts I keep failing. If I ever get addicted to a drug, I will never escape it. I have horrible dietary habits, though miraculously I have stayed lean enough. I don’t exercise and constantly fail to realize how most people around me could kick my ass.
I’ve read about half the sequences, and taken notes on maybe 15%. I think Gwern’s writing is not top-notch but always a pleasure to read. Methods of Rationality is a mediocre story by an author who isn’t. It’s not even in my top 20 fanfictions. Someday I’ll actually send him some feedback, but I think it would all be ignored because he’s trying to make fanfiction something it isn’t. To his credit, it worked much better than I thought it would. Three Worlds Collide demonstrates to me that most of you don’t understand the lack of ethics in this world: you should all accept that assimilation is the optimal solution.
On the other hand, I’d fight to the death and beyond to avoid it. I’m not ready to leave everything I am behind. I’m also not ready to sign up for cryonics, and I have definitely heard all the arguments for it. My pathetic refutations are that I don’t want to ruin my life trying to survive forever, that I’d rather live a good life now, and that I expect either that existence is such a cold, cruel place that civilization will fall soon, or that other life will preserve my existence anyway. Possibly with time travel. Or just through everything happening, like in Greg Egan’s novel Permutation City.
I think that’s about all I can write today. I hope I don’t make too many enemies here. Hope to get to know you all!
Welcome to LessWrong!
I would say that if you’re interested in rationality, you belong here. It doesn’t matter if you’re not that good at it yet; as long as you’re interested and want to improve, this is where you should be.
Be careful of the priming effects of calling yourself bad at rationality, questioning your place here, saying you’ll never escape a drug addiction, etc. etc. The article on cached selves might be somewhat relevant.
Three Worlds Collide demonstrates to me that most of you don’t understand the lack of ethics in this world: you should all accept that assimilation is the optimal solution.
On the other hand, I’d fight to the death and beyond to avoid it.
This suggests to me that you don’t understand ethics.
While I’m occasionally convinced of the existence of akrasia, it would be odd to claim that fighting to the death was caused by it.
I’d just like to point out that recently someone asked (doubtfully) whether anyone here still has strong feelings regarding Three Worlds Collide. It seems indeed to have a prominent place in the popular consciousness.
Well, I am new here, and I suppose it was slightly presumptuous of me to say that. I was just trying to introduce myself with a few of the thoughts I’ve had while reading here.
To attempt to clarify, I think that this story is rather like the fable of the Dragon-Tyrant. To live a life with even the faintest hint of displeasure is a horrific crime, the thought goes. I am under the impression that most people here operate with some sort of utilitarian philosophy. This to me seems to imply that unless one declares that there is no objective state toward which utilitarianism is directed, humanity in this example is wrong. (In case someone is making the distinction between ethics and morals, as an engineer, it doesn’t strike me as important.)
On the issue of akrasia, I don’t see this as a case of it. My own judgement says that a life like theirs is vapid and devoid of meaning. Fighting to the death against one’s own best judgement probably isn’t rare either; I expect many, many soldiers who had options other than fighting have died in wars they despised. In effect, I feel like this is multiplying by zero and adding infinity. You have more at the end; you’re just no longer the unique complex individual you were, and I could not bear to submit to that.
To live a life with even the faintest hint of displeasure is a horrific crime, the thought goes. I am under the impression that most people here operate with some sort of utilitarian philosophy. This to me seems to imply that unless one declares that there is no objective state toward which utilitarianism is directed, humanity in this example is wrong.
The general thrust of the Superhappy segments of Three Worlds Collide seems to be that simple utilitarian schemas based on subjective happiness or pleasure are insufficient to describe human value systems or preferences as they’re expressed in the wild. Similar points are made in the Fun Theory sequence. Neither of these means that utilitarianism generally is wrong; merely that the utility function we’re summing (or averaging over, or taking the minimum of, etc.) isn’t as simple as sometimes assumed.
Now, Fun Theory is probably one of the less well-developed sequences here (unfortunately, in my view; it’s a very deep question, and intimately related to human value structure and all its AI consequences), and you’re certainly free to prefer 3WC’s assimilation ending or to believe that the kind of soft wireheading the Superhappies embody really is optimal under some more or less objective criterion. That does seem to be implied in one form or another by several major schools of ethics, and any intuition pump I could deploy to convince you otherwise would probably end up looking a lot like the Assimilation Ending, which I gather you don’t find convincing.
Personally, though, I’m inclined to be sympathetic to the True Ending, and think more generally that pain and suffering tend to be wrongly conflated with moral evil when in fact there’s a considerably looser and more subtle relationship between the two. But I’m nowhere near a fully developed ethics, and while this seems to have something to do with the “complexity” you mentioned I feel like stopping there would be an unjustified handwave.
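To make the aggregation point above concrete, here is a minimal sketch (the utilities and outcomes are invented purely for illustration, not taken from the story or from anyone’s actual position) of how the same individual utilities can rank two outcomes differently depending on whether you sum them, average them, or take the minimum:

```python
# Toy illustration: the choice of aggregation rule matters.
# The utility numbers below are made up for two hypothetical outcomes.

outcome_a = [9, 9, 9, 1]   # three people very well off, one badly off
outcome_b = [6, 6, 6, 6]   # everyone moderately well off

def total(utilities):
    return sum(utilities)

def average(utilities):
    return sum(utilities) / len(utilities)

def worst_off(utilities):
    return min(utilities)

for name, rule in [("sum", total), ("average", average), ("min", worst_off)]:
    a, b = rule(outcome_a), rule(outcome_b)
    print(f"{name:7s}: A={a:5.2f}  B={b:5.2f}  -> prefers {'A' if a > b else 'B'}")

# sum and average both prefer A (28 vs 24, 7.00 vs 6.00); min prefers B (1 vs 6).
# Same people, same utilities; the disagreement is entirely in the aggregation rule.
```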
On the issue of akrasia, I don’t see this as a case of it. My own judgement says that a life like theirs is vapid and devoid of meaning. Fighting to the death against one’s own best judgement probably isn’t rare either; I expect many, many soldiers who had options other than fighting have died in wars they despised. In effect, I feel like this is multiplying by zero and adding infinity. You have more at the end; you’re just no longer the unique complex individual you were, and I could not bear to submit to that.
And you think that not being able to bear submitting to that is wrong?
Personally, I’m one of those who prefer the assimilation ending (there are quite a few of us), and I certainly wouldn’t be tempted to fight to the death or kill myself to avoid it. But for a person who would fight to the death to avoid it, saying that assimilation is optimal and the True Ending is senseless seems to me to be incoherent.
I think the confusion comes from what you mean by “utilitarian.” The whole point of Three Worlds Collide (well, one of the points) is that human preferences are not for happiness alone; the things we value include a life that’s not “vapid and devoid of meaning”, even if it’s happy! That’s why (to the extent we have to pick labels) I am a preference utilitarian, which seems to be the most common ethical philosophy I’ve encountered here (we’ll know more when Yvain’s survey comes out). If you prefer not to be a Superhappy, then preference utilitarianism says you shouldn’t be one.
When you catch yourself saying “the right thing is X, but the world I’d actually want to live in is Y,” be careful—a world that’s actually optimal would probably be one you want to live in.
If you’re able to summarize what makes the Superhappies’ lives vapid and devoid of meaning, I’d be interested.
TerminalAwareness’s words, not mine. I prefer the True Ending but wouldn’t call the Superhappies’ lives meaningless.
(nods) Gotcha.
I know it’s been some time, but I wanted to thank you for the reply. I’ve thought about it considerably, and I still feel that I’m right. I’m going to try to explain again.
Sure, we all have our own utility functions. Now, if you’re trying to maximize utility for everyone at once, that’s no easy task, and the compromise will leave each person with relatively little utility.
Would you condone someone forcing someone else to try chocolate, if that person believed it tasted bad but loved it as soon as they tried it? If someone mentally deranged set themselves on fire and asked you not to save them, would you? If someone is refusing cancer treatment because “Science is evil”, I at least would force the treatment on them. Whether you would force transhumanity on everyone who refused it is probably a better question for LessWrong. I feel that, though I may violate others’ utility functions, we’re all mentally deranged, and so someone should save us. Someone should violate our utility preferences to change them. Because that would bring an enormous amount of utility.
I’m struggling to reconcile respecting preferences with how much of society today works. Personally, I don’t think anyone should ever violate my utility preferences. But can you deny that there are people you think should have theirs changed? I’m inclined to think that a large part of this community is.
If you haven’t, you should read Yvain’s Consequentialism FAQ, which addresses some of these points in a little more detail.
Would you condone someone forcing someone else to try chocolate, if that person believed it tasted bad but loved it as soon as they tried it? If someone mentally deranged set themselves on fire and asked you not to save them, would you? If someone is refusing cancer treatment because “Science is evil”, I at least would force the treatment on them. Whether you would force transhumanity on everyone who refused it is probably a better question for LessWrong.
Preference utilitarianism works well for any situation you’ll encounter in real life, but it’s possible to propose questions it doesn’t answer very well. A popular answer to the above question on LessWrong comes from the idea of coherent extrapolated volition (The paper itself may be outdated). Essentially, it asks what we would want were we better informed, more self-aware, and more the people we wished we were. In philosophy, this is called idealized preference theory. CEV probably says we shouldn’t force someone to eat chocolate, because their preference for autonomy outweighs their extrapolated preference for chocolate. It probably says we should save the person on fire, since their non-mentally-ill extrapolated volition would want them to live, and ditto the cancer patient.
Forcing transhumanity on people is a harder question, because I’m not sure that everyone’s preferences would converge in this case. In any event, I would not personally do it, because I don’t trust my own reasoning enough.
But can you deny that there are people you think should have theirs changed?
I think all people, to the extent that they can be said to have utility functions, are wrong about what they want at least sometimes. I don’t think we should change their utility function so much as implement their ideal preferences, not their stated ones.
I’m inclined to think that a large part of this community is.
is what? Is willing to change people’s utility functions?
What does this even mean? Forcing immortality on people is at least a coherent notion, although I’m pretty sure most users around here support an individual’s right to self-terminate. But if that was what was meant, calling it ‘transhumanism’ is a little off.
On the other hand, is this referring to something handled by the fun theoretic concept of a eudaimonic rate of intelligence increase?
Yes, I know, “jargon jargon jargon buzzword buzzword rationality,” but I couldn’t think of a better way to phrase that. Sorry.
You’re right. I don’t know what TerminalAwareness meant, but I was thinking of something like uploading someone who doesn’t want to be uploaded, or increasing their intelligence (even at a eudaimonic rate) if they insist they like their current intelligence level just fine.
If it actually is coherent to speak of a “eudaimonic rate” of doing something to someone who doesn’t want it done, I need to significantly revise my understanding of the word “eudaimonic”.
I’m thinking that a eudaimonic rate of intelligence increase is one which maximizes our opportunities for learning, new insights, enjoyment, and personal growth, as opposed to an immediate jump to superintelligence. But I can imagine an exceedingly stubborn person who insists that they don’t want their intelligence increased at all, even after being told that they will be happier and lead a more meaningful life. Once they get smarter, they’ll presumably be happier with it.
Even if we accept that Fun Theory as outlined by Eliezer really is the best thing possible for human beings, there are certainly some who would currently reject it, right?
It seems to me like you’re trying to enforce your values on others. You might think you’re just trying to help, or do something good. I’m just a bit skeptical of anyone trying to enforce values rather than inspire or suggest.
But when it comes to the question, “Don’t I have to want to be as happy as possible?” then the answer is simply “No. If you don’t prefer it, why go there?”
:s/happy/intelligent
I’m not sure if we have a genuine disagreement, or if we’re disputing definitions. So without talking about eudaimonic anything, which of the following do you disagree with, if any?
What we want should be the basis for a better future, but the better future probably won’t look much like what we currently want.
CEV might point to something like uploading or dramatic intelligence enhancement that lots of people won’t currently want, though by definition it would be part of their extrapolated preferences.
A fair share of the population will probably, if polled, actively oppose what CEV says we really want.
It seems unlikely that the optimal intelligence level is the current one, but some people would probably oppose alteration to their intelligence. This isn’t a question of “Don’t I have to want to be as intelligent as possible?” so much as “Is what I currently want a good guide to my extrapolated volition?”
Most of these give me the heebie-jeebies, but I don’t really disagree with them.
But why would you want to live in a world where people are less happy than they could be? That sounds terribly evil.
I don’t think bland happiness is optimal. I’d prefer happiness along with an optimal mixture of pleasant qualia.
Human values are complex; there’s no reason to think that our values reduce to happiness, and lots of evidence that they don’t.
Let’s imagine two possible futures for humanity:
One, a drug is developed that offers unimaginable happiness, a thousand times better than heroin or whatever the drug that currently creates the most happiness is. Everyone is cured of aging and then hooked up to a machine that dispenses this drug until the heat death of the universe. The rest of our future light cone is converted into orgasmium. They are all maximally happy.
Two…
I think an eternity of what we’ve got right now would be better than number one, but I imagine lots of people on LessWrong would disagree with that. The best future I can imagine would be one where we make our own choices and our own mistakes, where we learn more about the world around us, get smarter, and get stronger, a world happier than this one, but not cured of disappointment and heartbreak entirely… Eliezer’s written about this at some length.
Some people honestly prefer future 1, and that’s fine. But the original poster seemed to be saying he accepts future 1 is right but would hate it, which should be a red flag.
I don’t think a drug would be adequate. Bland happiness is not enough; I would prefer a future with an optimal mix of pleasurable qualia. This is why I prefer the “wireheading” term.
I don’t understand how you could possibly prefer the status quo. Imagine everything was exactly the same but one single person was a little bit happier. Wouldn’t you prefer this future? If you prefer futures where people are happier as a rule then isn’t the best future the one where people are most happy?
I don’t understand how he could hate being happy. People enjoy being happy by definition.
Imagine everything was exactly the same but one single person was a little bit happier. Wouldn’t you prefer this future? If you prefer futures where people are happier as a rule then isn’t the best future the one where people are most happy?
Choosing a world where everything is the same except that one person is a bit happier suggests a preference for more happiness than there currently is, all else being equal. It doesn’t even remotely suggest a preference for maximizing happiness at any cost.
I would prefer a world exactly like this one except that I have a bit more ice cream in my freezer than I currently do, but I don’t want the universe tiled with ice cream.
So you would prefer a world where everyone is maximally happy all the time but otherwise nothing is different?
Just as, under the ridiculous assumption that the marginal utility of more ice cream is constant, you would prefer a universe tiled with ice cream as long as it didn’t get in the way of anything else or use resources needed for anything else?
So you would prefer a world where everyone is maximally happy all the time but otherwise nothing is different?
I think this has way too many consequences to frame meaningfully as “but nothing otherwise is different.” Kind of like “everything is exactly the same except the polarity of gravity is reversed.” I can’t judge how much utility to assign to a world where everyone is maximally happy all the time but the world is otherwise just like ours, because I can’t even make sense of the notion.
If you assign constant marginal utility to increases in ice cream and assume that ice cream can be increased indefinitely while keeping everything else constant, then of course you can increase utility by continuing to add more ice cream, simply as a matter of basic math. But I would say that not only is it not a meaningful proposition, it’s not really illustrative of anything in particular save for how not to use mathematical models.
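As a toy illustration of that “basic math” (the two functions and all the numbers below are made up purely for illustration): a constant-marginal-utility model grows without bound in the amount of ice cream, while a diminishing-returns model levels off, which is roughly why the constant assumption is the part doing all the work:

```python
import math

# Two made-up models of utility as a function of ice cream quantity x (arbitrary units).
def constant_marginal(x):
    return 1.0 * x  # every extra unit adds the same utility, forever

def diminishing(x):
    return 10.0 * (1.0 - math.exp(-x / 10.0))  # extra units add less and less, approaching a ceiling of 10

for x in [1, 10, 100, 1000]:
    print(f"x={x:5d}  constant={constant_marginal(x):8.1f}  diminishing={diminishing(x):6.2f}")

# Under the constant model, utility keeps climbing with every scoop (1000 units -> 1000.0),
# so "tile the universe with ice cream" falls straight out of the arithmetic. Under the
# diminishing model, the gain past a freezer's worth is negligible, so any competing use
# of the same resources wins out.
```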
I would prefer “status quo plus one person is more happy” to “status quo”. I would not prefer “orgasmium” to “status quo”, because I honestly think orgasmium is nearly as undesirable as paperclips.
If you prefer futures where people are happier as a rule then isn’t the best future the one where people are most happy?
Doesn’t follow. I generally prefer futures where people are happier; I also generally prefer futures where they have greater autonomy, novel experiences, meaningful challenge… When these trade off, I sometimes choose one, sometimes another. The “best future” I can imagine is probably a balance of all of these.
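A minimal sketch of what that kind of trade-off can look like (every score and weight below is invented for illustration; this is not a real model of anyone’s values): score candidate futures on several values at once, and the future that maximizes the happiness term alone need not come out on top overall:

```python
# Hypothetical futures scored 0-10 on a few things people seem to value.
futures = {
    "orgasmium":  {"happiness": 10, "autonomy": 0, "novelty": 0, "challenge": 0},
    "status quo": {"happiness": 5,  "autonomy": 6, "novelty": 5, "challenge": 6},
    "eudaimonic": {"happiness": 8,  "autonomy": 8, "novelty": 8, "challenge": 7},
}

# Invented weights; the only point is that happiness is one term among several.
weights = {"happiness": 0.4, "autonomy": 0.25, "novelty": 0.2, "challenge": 0.15}

def overall(future):
    return sum(weights[value] * score for value, score in future.items())

for name, future in sorted(futures.items(), key=lambda item: -overall(item[1])):
    print(f"{name:10s} -> {overall(future):.2f}")

# "orgasmium" maxes out the happiness term but totals 4.00, behind "status quo" (5.40)
# and "eudaimonic" (7.85), because the other terms carry weight too.
```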
I don’t understand how he could hate being happy. People enjoy being happy by definition.
Present-him presumably is very unhappy at the thought of becoming someone who will be happily a wirehead, just as present-me doesn’t want to try heroin though it would undoubtedly make me happy.
On the issue of akrasia, I don’t see this as a case of it. My own judgement says that a life like theirs is vapid and devoid of meaning. Fighting to the death against one’s own best judgement probably isn’t rare either; I expect many, many soldiers who had options other than fighting have died in wars they despised. In effect, I feel like this is multiplying by zero and adding infinity. You have more at the end; you’re just no longer the unique complex individual you were, and I could not bear to submit to that.
It really does seem like either you don’t really believe that the assimilation ending is optimal and you prefer the true ending, or you are suffering from akrasia by fighting against it despite believing that it is. You haven’t really explained why it could be anything else.