I know it’s been some time, but I wanted to thank you for the reply. I’ve thought about it considerably, and I still feel that I’m right. I’m going to try to explain again.
Sure, we all have our own utility functions. Now, if you’re trying to maximize utility for everyone, that’s no easy task, and you’ll end up with a relatively small amount of utility.
Would you condone forcing someone to try chocolate if they believed it tasted bad, but loved it as soon as they tried it? If someone mentally deranged set themselves on fire and asked you not to save them, would you? If someone is refusing cancer treatment because “Science is evil”, I at least would force the treatment on them. Whether you would force transhumanity on everyone who refused it is probably a better question for LessWrong. I feel that, though I may violate others’ utility functions, we’re all mentally deranged, and so someone should save us. Someone should violate our utility preferences in order to change them, because that would bring an enormous amount of utility.
I’m struggling to reconcile respecting preferences with how so much of society works today. Personally, I don’t think anyone should ever violate my utility preferences. But can you deny that there are people you think should have theirs changed? I’m inclined to think that a large part of this community is.
If you haven’t, you should read Yvain’s Consequentialism FAQ, which addresses some of these points in a little more detail.
Would you condone forcing someone to try chocolate if they believed it tasted bad, but loved it as soon as they tried it? If someone mentally deranged set themselves on fire and asked you not to save them, would you? If someone is refusing cancer treatment because “Science is evil”, I at least would force the treatment on them. Whether you would force transhumanity on everyone who refused it is probably a better question for LessWrong.
Preference utilitarianism works well for any situation you’ll encounter in real life, but it’s possible to propose questions it doesn’t answer very well. A popular answer to the above question on LessWrong comes from the idea of coherent extrapolated volition (The paper itself may be outdated). Essentially, it asks what we would want were we better informed, more self-aware, and more the people we wished we were. In philosophy, this is called idealized preference theory. CEV probably says we shouldn’t force someone to eat chocolate, because their preference for autonomy outweighs their extrapolated preference for chocolate. It probably says we should save the person on fire, since their non-mentally-ill extrapolated volition would want them to live, and ditto the cancer patient.
Forcing transhumanity on people is a harder question, because I’m not sure that everyone’s preferences would converge in this case. In any event, I would not personally do it, because I don’t trust my own reasoning enough.
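To make the stated-versus-extrapolated distinction above concrete, here is a minimal sketch in Python, assuming a toy model with invented utility numbers and a hypothetical choose_for helper (purely illustrative, not anything from the CEV paper itself): it acts on what the agent would want under idealization rather than on what they currently say they want.

```python
from dataclasses import dataclass


@dataclass
class Agent:
    """An agent with the preferences it reports and the preferences it would
    endorse if better informed and more self-aware (all values invented)."""
    name: str
    stated: dict[str, float]        # utility the agent currently assigns to each option
    extrapolated: dict[str, float]  # utility under the idealized ("extrapolated") view


def choose_for(agent: Agent, options: list[str]) -> str:
    """Pick the option with the highest extrapolated utility, ignoring stated utility."""
    return max(options, key=lambda option: agent.extrapolated.get(option, 0.0))


# The cancer-refusal case from the thread: the stated preference says "refuse",
# but the extrapolated preference (minus the "Science is evil" belief) says "treat".
patient = Agent(
    name="patient",
    stated={"treat": -5.0, "refuse": 1.0},
    extrapolated={"treat": 10.0, "refuse": -10.0},
)

print(choose_for(patient, ["treat", "refuse"]))  # prints "treat"
```

The chocolate case would come out differently only if the model also weighed an extrapolated preference for autonomy against the extrapolated taste for chocolate, which this toy deliberately leaves out.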
But can you deny that there are people you think should have theirs changed?
I think all people, to the extent that they can be said to have utility functions, are wrong about what they want at least some of the time. I don’t think we should change their utility functions so much as implement their ideal preferences rather than their stated ones.
I’m inclined to think that a large part of this community is.
is what? Is willing to change people’s utility functions?
What does this even mean? Forcing immortality on people is at least a coherent notion, although I’m pretty sure most users around here support an individual’s right to self-terminate. But if that was what was meant, calling it ‘transhumanism’ is a little off.
On the other hand, is this referring to something handled by the fun-theoretic concept of a eudaimonic rate of intelligence increase?
Yes, I know, “jargon jargon jargon buzzword buzzword rationality,” but I couldn’t think of a better way to phrase that. Sorry.
You’re right. I don’t know what Terminal Awareness meant, but I was thinking of something like uploading someone who doesn’t want to be uploaded, or increasing their intelligence (even at a eudaimonic rate) if they insist they like their current intelligence level just fine.
If it actually is coherent to speak of a “eudaimonic rate” of doing something to someone who doesn’t want it done, I need to significantly revise my understanding of the word “eudaimonic”.
I’m thinking that a eudaimonic rate of intelligence increase is one which maximizes our opportunities for learning, new insights, enjoyment, and personal growth, as opposed to an immediate jump to superintelligence. But I can imagine an exceedingly stubborn person who insists that they don’t want their intelligence increased at all, even after being told that they will be happier and lead a more meaningful life. Once they get smarter, they’ll presumably be happier with it.
Even if we accept that Fun Theory as outlined by Eliezer really is the best thing possible for human beings, there are certainly some who would currently reject it, right?
It seems to me like you’re trying to enforce your values on others. You might think you’re just trying to help, or do something good. I’m just a bit skeptical of anyone trying to enforce values rather than inspire or suggest.
But when it comes to the question, “Don’t I have to want to be as happy as possible?” then the answer is simply “No. If you don’t prefer it, why go there?”
:s/happy/intelligent
I’m not sure if we have a genuine disagreement, or if we’re disputing definitions. So without talking about eudaimonic anything, which of the following do you disagree with, if any?
What we want should be the basis for a better future, but the better future probably won’t look much like what we currently want.
CEV might point to something like uploading or dramatic intelligence enhancement that lots of people won’t currently want, though by definition it would be part of their extrapolated preferences.
A fair share of the population will probably, if polled, actively oppose what CEV says we really want.
It seems unlikely that the optimal intelligence level is the current one, but some people would probably oppose alteration to their intelligence. This isn’t a question of “Don’t I have to want to be as intelligent as possible?” so much as “Is what I currently want a good guide to my extrapolated volition?”
Most of these give me the heebie-jeebies, but I don’t really disagree with them.