This is a great device for illustrating how devilishly hard it is to do anything constructive with such overwhelming power, yet not be seen as taking over the world. If you give each individual whatever they want, you’ve just destroyed every variety of collectivism or traditionalism on the planet, and those who valued those philosophies will curse you. If you implement any single utopian vision, everyone who wanted a different one will hate you, and if you limit yourself to any minimal level of intervention, everyone who wants larger benefits than you provide will be unhappy.
Really, I doubt that there is any course you can follow that won’t draw the ire of a large minority of humanity, because too many of us are emotionally committed to inflicting various conflicting forms of coercion on each other.
If you give each individual whatever they want, you’ve just destroyed every variety of collectivism or traditionalism on the planet,
Unless, of course, anyone actually wants to participate in such systems, in which case you have (for commonly-accepted values of ‘want’ and ‘everyone’) allowed them to do so. Someone who’d rather stand in the People’s Turnip-Requisitioning Queue for six hours than have unlimited free candy is free to do so, and someone who’d rather watch everyone else do so can have a private world with millions of functionally-indistinguishable simulacra. Someone who demands that other real people participate, whether they want to or not, and can’t find enough quasi-volunteers, is wallowing so deep in their own hypocrisy that nothing within the realm of logic could be satisfactory.
If you use your unlimited power to make everyone, including yourself, constantly happy by design, and reprogram everybody’s minds to always approve of whatever you do, nobody will complain or hate you. Make every particle in the universe cooperate perfectly to maximize the amount of happiness in all future spacetime (and in the past as well, if time travel is possible when you have unlimited power). Then there would be no need for free will or individual autonomy for anybody anymore.
Why was that downvoted by 3?
What I did was disprove Billy Brown’s claim that “If you implement any single utopian vision, everyone who wanted a different one will hate you”. Was it wrong of me to do so?
While you are technically correct, the spirit of the original post, charitably interpreted, was, as I read it, “no matter what you decide to do with your unlimited power, someone will hate your plan”. Of course, if you decide to use your unlimited power to blow up the earth, no one will complain, because they’re all dead. But if you asked the population of earth what they think of your plan to blow up the earth, the response would be largely negative. The contention is that no matter what plan you try to concoct, there will be someone such that, if you told them about the plan and they could see what the outcome would be, they would hate it.
Perhaps they see you as splitting hairs between being seen as taking over the world, and actually taking over the world. In your scenario you are not seen as taking over the world because you eliminate the ability to see that—but that means that you’ve actually taken over the world (to a degree greater than anyone has ever achieved before).
But in point of fact, you’re right about the claim as stated. As for the downvotes—voting is frequently unfair, here and everywhere else.
I didn’t mean to split hairs at all. I’m surprised that so many here seem to take it for granted that, if one had unlimited power, one would choose to let other people keep some say and some autonomy. If I would have to listen to anybody else in order to be able to make the best possible decision about what to do with the world, this would mean I would have less than unlimited power. Someone who has unlimited power will always do the right thing, by nature.
And besides:
Suppose I’d have less than unlimited power but still “rather complete” power over every human being, and suppose I’d create what would be a “utopia” only to some, but without changing anybody’s mind against their will, and suppose some people would then hate me for having created that “utopia”. Then why would they hate me? Because they would be unhappy. If I’d simply make them constantly happy by design—I wouldn’t even have to make them intellectually approve of my utopia to do that—they wouldn’t hate me, because a happy person doesn’t hate.
Therefore, even in a scenario where I had not only “taken over the world”, but where I would also be seen as having taken over the world, still nobody would hate me.
If I would have to listen to anybody else in order to be able to make the best possible decision about what to do with the world, this would mean I would have less than unlimited power. Someone who has unlimited power will always do the right thing, by nature.
I recommend reading this sequence. Suffice it to say that you are wrong, and power does not bring with it morality.
a happy person doesn’t hate.
What is your support for this claim? (I smell argument by definition...)
What’s wrong with Uni’s claim? If you have unlimited power, one possible choice is to put all the other sentient beings into a state of euphoric intoxication such that they don’t hate you. Yes, that is by definition. Go figure out a state for each agent so that it doesn’t hate you and put it into that state; then you’ve got a counterexample to Billy’s claim above. Maybe a given agent’s former volition would have chosen to hate you if it had been aware of the state you forced it into later on, but that’s a different thing from caring whether the agent itself hates you as a result of the changes you make. This is a valid counterexample. I have read the Coming of Age sequence and don’t see what you’re referring to in there that makes your point. Perhaps you could point me back to some specific parts of those posts.
Suffice it to say that you are wrong, and power does not bring with it morality.
I have never assumed that “power brings with it morality” if by “power” we mean limited power. Some superhuman AI might very well be more immoral than humans are. I think unlimited power would bring with it morality. If you have access to every single particle in the universe and can put it wherever you want, and thus create whatever is theoretically possible for an almighty being to create, you will know how to fill all of spacetime with the largest possible amount of happiness. And you will do that, since you will be intelligent enough to understand that that’s what gives you the most happiness. (And, needless to say, you will also find a way to be the one to experience all that happiness.) Given hedonistic utilitarianism, this is the best thing that could happen, no matter who got the unlimited power and what that person’s moral standards initially were. If you don’t think hedonistic utilitarianism (or hedonism) is moral, it’s understandable that you think a world filled with the maximum amount of happiness might not be a moral outcome, especially if achieving that goal took killing lots of people against their will, for example. But that alone doesn’t prove I’m wrong. Much of what humans think to be very wrong is not wrong in all circumstances. To prove me wrong, you have to either prove hedonism and hedonistic utilitarianism wrong first, or prove that a being with unlimited power wouldn’t understand that it would be best for him to fill the universe with as much happiness as possible and experience all that happiness.
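To make the claim being debated concrete (a sketch in my own notation, not anything Uni wrote), the hedonistic-utilitarian objective Uni appeals to can be written as choosing, among all arrangements of matter an unlimited-power being could bring about, the one that maximizes total happiness over all of spacetime:

$$
a^{*} \;=\; \arg\max_{a \in A} \int_{\text{spacetime}} h(x, t; a)\, \mathrm{d}x\, \mathrm{d}t
$$

Here $A$ is the (hypothetical) set of arrangements the being can bring about, and $h(x, t; a)$ is the amount of happiness at location $x$ and time $t$ under arrangement $a$. Uni’s contested step is that any being able to compute and implement $a^{*}$ would in fact choose it; the replies below question exactly that step.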
If you have access to every single particle in the universe and can put it wherever you want, and thus create whatever is theoretically possible for an almighty being to create, you will know how to fill all of spacetime with the largest possible amount of happiness.
What I got out of this sentence is that you believe someone (anyone?), given absolute power over the universe, would be imbued with knowledge of how to maximize for human happiness. Is that an accurate representation of your position? Would you be willing to provide a more detailed explanation?
And you will do that, since you will be intelligent enough to understand that that’s what gives you the most happiness.
Not everyone is a hedonistic utilitarian. What if the person/entity who ends up with ultimate power enjoys the suffering of others? Is your claim that their value system would be rewritten to hedonistic utilitarianism upon receiving power? I do not see any reason why that should be the case. What are your reasons for believing that a being with unlimited power would understand that?
To prove me wrong, you have to either prove hedonism and hedonistic utilitarianism wrong first, or prove that a being with unlimited power wouldn’t understand that it would be best for him to fill the universe with as much happiness as possible and experience all that happiness.
I’m not sure about ‘proof’, but hedonistic utilitarianism can be casually dismissed out of hand as not particularly desirable, and the idea that giving a being ultimate power will make them adopt such preferences is absurd.
I’d be interested to hear a bit more detail as to why it can be dismissed out of hand. Is there a link I could go read?
This is a cliche and may be false, but it’s assumed true:
“Power corrupts and absolute power corrupts absolutely”.
I wouldn’t want anybody to have absolute power, not even myself; the only use of absolute power I would like to have would be to stop any evil person from getting it.
To my mind, evil = coercion, and therefore any human who seeks any kind of coercion over others is evil.
My version of evil is the least evil, I believe.
EDIT: Why did I get voted down for saying “power corrupts”—the corollary of which is that rejecting power is less corrupting—whereas Eliezer gets voted up for saying exactly the same thing? Someone who voted me down should respond with their reasoning.
Given humanity’s complete lack of experience with absolute power, it seems like you can’t even take that cliche for weak evidence. Having glided through the article and comments again, I also don’t see where Eliezer said “rejection of power is less corrupt”. The bit about Eliezer sighing and saying the null-actor did the right thing?
If I would have to listen to anybody else in order to be able to make the best possible decision about what to do with the world, this would mean I would have less than unlimited power. Someone who has unlimited power will always do the right thing, by nature.
I recommend reading this sequence. Suffice it to say that you are wrong, and power does not bring with it morality.
It seems to me that the claim that Uni is making is not the same as the claim that you think e’s making, mostly because Uni is using definitions of ‘best possible decision’ and ‘right thing’ that are different from the ones that are usually used here.
It looks to me (and please correct me if I’m wrong, Uni) that Uni is basing eir definition on the idea that there is no objectively correct morality, not even one like Eliezer’s CEV—that morality and ‘the right thing to do’ are purely social ideas, defined by the people in a relevant situation.
Thus, if Uni had unlimited power, it would by definition be within eir power to cause the other people in the situation to consider eir actions correct, and e would do so.
If this is the argument that Uni is trying to make, then the standard arguments that power doesn’t cause morality are basically irrelevant, since Uni is not making the kinds of claims about an all-powerful person’s behavior that those apply to.
E appears to be claiming that an all-powerful person would always use that power to cause all relevant other people to consider their actions correct, which I suspect is incorrect, but e’s basically not making any other claims about the likely behavior of such an entity.
This is certainly true. If you have sufficient power, and if my existing values, preferences, beliefs, expectations, etc. are of little or no value to you, but my approval is, then you can choose to override my existing values, preferences, beliefs, expectations, etc. and replace them with whatever values, preferences, beliefs, expectations, etc. would cause me to approve of whatever it is you’ve done, and that achieves your goals.
Suppose you’d say it would be wrong of me to make the haters happy “against their will”. Why would that be wrong, if they would be happy to be happy once they have become happy? Should we not try to prevent suicides either? Not even the most obviously premature suicides, not even temporarily, not even only to make the suicide attempter rethink their decision a little more thoroughly?
Making a hater happy “against his will”, with the result that he stops hating, is (I think) comparable to preventing a premature suicide in order to give that person an opportunity to reevaluate his situation and come to a better decision (by himself). By respecting only what a person wants right now, you are not respecting “that person including who he will be in the future”; you are respecting only a tiny fraction of that. Strictly speaking, even the “now” we are talking about is in the future, because if you are now deciding to act in someone’s interest, you should base your decision on your expectation of what he will want by the time your action would start affecting him (which is not exactly now), rather than on what he wants right now. So, whenever you respect someone’s preferences, you are (or at least should be) respecting his future preferences, not his present ones.
(Suppose, for example, that you strongly suspect that, one second from now, I will prefer a painless state of mind, but that you see that, right now, I’m trying to cut off a piece of wood in a way that will make me cut my leg in one second if you don’t interfere. You should then interfere, and that can be explained, if by nothing else, by your expectation of what I will want one second from now, even if right now I have no preference other than getting that piece of wood cut in two.)
I suggest one should respect another person’s (expected) distant future preferences more than his “present” (that is, very near future) ones, because his future preferences are more numerous (since there is more time for them) than his “present” ones. One would arguably be respecting him more that way, because one would be respecting more of his preferences—not favoring any one of his preferences over any other just because it happens to take place at a certain time.
This way, hedonistic utilitarianism can be seen as compatible with preference utilitarianism.
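One way to make the “more numerous future preferences” argument above explicit (my own sketch and notation, not Uni’s): give every moment of a person’s remaining life equal weight, and score an action by how well it satisfies the preferences held at each moment.

$$
R(a) \;=\; \int_{t_{0}}^{T} \mathbb{E}\big[\, s(a, P_{t}) \,\big]\, \mathrm{d}t
$$

Here $P_{t}$ stands for the person’s preferences at time $t$, $s(a, P_{t})$ for how well action $a$ satisfies them, $t_{0}$ for the present, and $T$ for the end of the person’s life. Under this equal weighting, the instantaneous present contributes almost nothing to $R(a)$ next to the expected future, which is the sense in which expected future preferences would count for more than present ones.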
Incidentally, it is currently possible to achieve total happiness, or perhaps a close approximation. A carefully implanted electrode in the right part of the brain will be more desirable to a starving rat than food, for example. While this part of the brain is called the “pleasure center”, it may be more about desire and reward. Nevertheless, pleasure and happiness are by necessity mental states, and it should be possible to create them artificially.
Why should a man who is perfectly content bother to get up to eat, or to achieve anything? He may starve to death, but he would be happy to do so. And such a man will be content with his current state, which of course is contentment, and will not resent it at all. Even in a less invasive case, where a man is given almost everything he wants, yet not so much that he never becomes dissatisfied with the amount of food in his belly and decides to put more in, there will still be higher-level motivations that this man loses.
While I consider myself a utilitarian, and believe the best choices are those that maximize the values of everyone, I cannot agree with the above situation. For now, this is no problem because people in their current state would not choose to artificially fulfill their desires via electrode implants, nor is it yet possible to actually fulfill everyone’s desires in the real world. I shall now go and rethink why I choose a certain path, if I cannot abide reaching the destination.
First, let me congratulate you on stopping to rethink when you realize that you’ve found a seeming contradiction in your own thinking. Most people aren’t able to see the contradictions in their beliefs, and when/if they do, they fail to actually do anything about them.
While it is theoretically possible to artificially create pleasure and happiness (which, around here, we call wireheading), converting the entire observable universe to orgasmium (maximum-pleasure-experiencing substance) seems to go a bit beyond that. In general, I think you’ll find most people around here are against both, even though they’d call themselves “utilitarians” or similar. This is because there’s more than one form of utilitarianism; many Less Wrongers believe other forms, like preference utilitarianism, are correct, instead of the original Millsian hedonistic utilitarianism.