If I had to listen to anybody else in order to make the best possible decision about what to do with the world, that would mean I had less than unlimited power. Someone who has unlimited power will always do the right thing, by nature.
I recommend reading this sequence. Suffice it to say that you are wrong, and power does not bring with it morality.
a happy person doesn’t hate.
What is your support for this claim? (I smell argument by definition...)
What’s wrong with Uni’s claim? If you have unlimited power, one possible choice is to put all the other sentient beings into a state of euphoric intoxication such that they don’t hate you. Yes, that is by definition. Figure out, for each agent, a state in which it doesn’t hate you and put it into that state; then you’ve got a counterexample to Billy’s claim above. Maybe a given agent’s former volition would have chosen to hate you if it had been aware of the state you later forced it into, but that’s a different thing from caring whether the agent itself hates you as a result of the changes you make. This is a valid counterexample. I have read the Coming of Age sequence and don’t see what you’re referring to in there that makes your point. Perhaps you could point me back to some specific parts of those posts.
Suffice it to say that you are wrong, and power does not bring with it morality.
Thanks for recommending. I have never assumed that “power brings with it morality” if by power we mean limited power. Some superhuman AI might very well be more immoral than humans are. I think unlimited power would bring with it morality. If you have access to every single particle in the universe and can put it wherever you want, and thus create whatever is theoretically possible for an almighty being to create, you will know how to fill all of spacetime with the largest possible amount of happiness. And you will do that, since you will be intelligent enough to understand that that’s what gives you the most happiness. (And, needless to say, you will also find a way to be the one to experience all that happiness.) Given hedonistic utilitarianism, this is the best thing that could happen, no matter who got the unlimited power and what the moral standards of that person initially were. If you don’t think hedonistic utilitarianism (or hedonism) is moral, it’s understandable that you think a world filled with the maximum amount of happiness might not be a moral outcome, especially if achieving that goal took killing lots of people against their will, for example. But that alone doesn’t prove I’m wrong. Much of what humans think to be very wrong is not wrong in all circumstances. To prove me wrong, you have to either prove hedonism and hedonistic utilitarianism wrong first, or prove that a being with unlimited power wouldn’t understand that it would be best for him to fill the universe with as much happiness as possible and experience all that happiness.
If you have access to every single particle in the universe and can put it wherever you want, and thus create whatever is theoretically possible for an almighty being to create, you will know how to fill all of spacetime with the largest possible amount of happiness.
What I got out of this sentence is that you believe someone (anyone?), given absolute power over the universe, would be imbued with knowledge of how to maximize for human happiness. Is that an accurate representation of your position? Would you be willing to provide a more detailed explanation?
And you will do that, since you will be intelligent enough to understand that that’s what gives you the most happiness.
Not everyone is a hedonistic utilitarian. What if the person/entity who ends up with ultimate power enjoys the suffering of others? Is your claim that their value system would be rewritten to hedonistic utilitarianism upon receiving power? I do not see any reason why that should be the case. What are your reasons for believing that a being with unlimited power would understand that?
To prove me wrong, you have to either prove hedonism and hedonistic utilitarianism wrong first, or prove that a being with unlimited power wouldn’t understand that it would be best for him to fill the universe with as much happiness as possible and experience all that happiness.
I’m not sure about ‘proof’, but hedonistic utilitarianism can be casually dismissed out of hand as not particularly desirable, and the idea that giving a being ultimate power will make them adopt such preferences is absurd.
I’d be interested to hear a bit more detail as to why it can be dismissed out of hand. Is there a link I could go read?
This is a cliché and may be false, but it’s assumed true: “Power corrupts and absolute power corrupts absolutely”.
I wouldn’t want anybody to have absolute power, not even myself; the only use of absolute power I would want would be to stop any evil person from getting it.
To my mind, evil = coercion, and therefore any human who seeks any kind of coercion over others is evil.
My version of evil is the least evil, I believe.
EDIT: Why did I get voted down for saying “power corrupts” (the corollary of which is that rejection of power is less corrupt) whereas Eliezer gets voted up for saying exactly the same thing? Someone who voted me down should respond with their reasoning.
Given humanity’s complete lack of experience with absolute power, it seems like you can’t even take that cliché for weak evidence. Having skimmed through the article and comments again, I also don’t see where Eliezer said “rejection of power is less corrupt.” Do you mean the bit about Eliezer sighing and saying the null-actor did the right thing?
(No, I wasn’t the one who downvoted)
If I had to listen to anybody else in order to make the best possible decision about what to do with the world, that would mean I had less than unlimited power. Someone who has unlimited power will always do the right thing, by nature.
I recommend reading this sequence. Suffice it to say that you are wrong, and power does not bring with it morality.
It seems to me that the claim that Uni is making is not the same as the claim that you think e’s making, mostly because Uni is using definitions of ‘best possible decision’ and ‘right thing’ that are different from the ones that are usually used here.
It looks to me (and please correct me if I’m wrong, Uni) that Uni is basing eir definition on the idea that there is no objectively correct morality, not even one like Eliezer’s CEV—that morality and ‘the right thing to do’ are purely social ideas, defined by the people in a relevant situation.
Thus, if Uni had unlimited power, it would by definition be within eir power to cause the other people in the situation to consider eir actions correct, and e would do so.
If this is the argument that Uni is trying to make, then the standard arguments that power doesn’t cause morality are basically irrelevant, since Uni is not making the kinds of claims about an all-powerful person’s behavior that those apply to.
E appears to be claiming that an all-powerful person would always use that power to cause all relevant other people to consider their actions correct, which I suspect is incorrect, but e’s basically not making any other claims about the likely behavior of such an entity.