Suffice it to say that you are wrong, and power does not bring with it morality.
I have never assumed that “power brings with it morality” if by “power” we mean limited power. Some superhuman AI might very well be more immoral than humans are. I do think, however, that unlimited power would bring morality with it.

If you have access to every single particle in the universe and can put it wherever you want, and thus create whatever is theoretically possible for an almighty being to create, you will know how to fill all of spacetime with the largest possible amount of happiness. And you will do that, since you will be intelligent enough to understand that that’s what gives you the most happiness. (And, needless to say, you will also find a way to be the one to experience all that happiness.) Given hedonistic utilitarianism, this is the best thing that could happen, no matter who got the unlimited power and what that person’s moral standards initially were.

If you don’t think hedonistic utilitarianism (or hedonism) is moral, it’s understandable that you think a world filled with the maximum amount of happiness might not be a moral outcome, especially if achieving that goal required killing lots of people against their will, for example. But that alone doesn’t prove I’m wrong. Much of what humans consider very wrong is not wrong in all circumstances. To prove me wrong, you have to either prove hedonism and hedonistic utilitarianism wrong first, or prove that a being with unlimited power wouldn’t understand that it would be best for him to fill the universe with as much happiness as possible and experience all that happiness.
If you have access to every single particle in the universe and can put it wherever you want, and thus create whatever is theoretically possible for an almighty being to create, you will know how to fill all of spacetime with the largest possible amount of happiness.
What I got out of this sentence is that you believe someone (anyone?), given absolute power over the universe, would be imbued with knowledge of how to maximize for human happiness. Is that an accurate representation of your position? Would you be willing to provide a more detailed explanation?
And you will do that, since you will be intelligent enough to understand that that’s what gives you the most happiness.
Not everyone is a hedonistic utilitarian. What if the person/entity who ends up with ultimate power enjoys the suffering of others? Is your claim that their value system would be rewritten to hedonistic utilitarianism upon receiving power? I do not see any reason why that should be the case. What are your reasons for believing that a being with unlimited power would understand that?
To prove me wrong, you have to either prove hedonism and hedonistic utilitarianism wrong first, or prove that a being with unlimited power wouldn’t understand that it would be best for him to fill the universe with as much happiness as possible and experience all that happiness.
I’m not sure about ‘proof’, but hedonistic utilitarianism can be dismissed out of hand as not particularly desirable, and the idea that giving a being ultimate power would make them adopt such preferences is absurd.
This is a cliché and may be false, but it’s generally assumed to be true:
“Power corrupts and absolute power corrupts absolutely”.
I wouldn’t want anybody to have absolute power, not even myself. The only use I would want for absolute power would be to stop any evil person from getting it.
To my mind, evil = coercion, and therefore any human who seeks any kind of coercion over others is evil.
My version of evil is the least evil, I believe.
EDIT: Why did I get voted down for saying “power corrupts” (the corollary of which is that rejection of power is less corrupt), while Eliezer gets voted up for saying exactly the same thing? Whoever voted me down should respond with their reasoning.
Given humanity’s complete lack of experience with absolute power, it seems like you can’t even take that cliché as weak evidence. Having skimmed the article and comments again, I also don’t see where Eliezer said “rejection of power is less corrupt.” Do you mean the bit about Eliezer sighing and saying the null-actor did the right thing?

(No, I wasn’t the one who downvoted.)
Thanks for recommending.
I’m not sure about ‘proof’, but hedonistic utilitarianism can be dismissed out of hand as not particularly desirable, and the idea that giving a being ultimate power would make them adopt such preferences is absurd.
I’d be interested to hear a bit more detail as to why it can be dismissed out of hand. Is there a link I could go read?