When I imagine a Friendly AI, I imagine a hands-off benefactor who permits people to do anything they wish that won’t result in harm to others.
Yeah, I like personal freedom, too, but you have to realize that this is massively, massively underspecified. What exactly constitutes “harm”, and what specific mechanisms are in place to prevent it? Presumably a punch in the face is “harm”; what about an unexpected pat on the back? What about all other possible forms of physical contact that you don’t know how to consider in advance? If loud verbal abuse is harm, what about polite criticism? What about all other possible ways of affecting someone via sound waves that you don’t know how to consider in advance? &c., ad infinitum.
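To make the underspecification point concrete, here is a toy sketch (every action name and rule in it is invented for illustration, not taken from the thread): any finite rule list for “harm” covers only the cases someone thought of in advance, and everything else falls through.

```python
# A naive attempt to pin down "harm" as an explicit rule list. Every name
# here is invented for illustration; the point is that any finite
# enumeration leaves most of the space of possible actions unhandled.

HARMFUL_ACTIONS = {"punch_in_face", "loud_verbal_abuse"}
PERMITTED_ACTIONS = {"pat_on_back", "polite_criticism"}

def is_harm(action: str) -> bool:
    if action in HARMFUL_ACTIONS:
        return True
    if action in PERMITTED_ACTIONS:
        return False
    # Everything not considered in advance falls through to here:
    # every other form of physical contact, every other way of
    # affecting someone via sound waves, &c., ad infinitum.
    raise ValueError(f"unspecified case: {action!r}")
```

The enumerated cases behave as intended, but the first action nobody anticipated forces an error (or, worse, a silent guess), which is the sense in which the specification is incomplete.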
Does anybody envisage a Friendly AI which doesn’t correspond more or less directly with their own political beliefs?
I’m starting to think this entire idea of “having political beliefs” is crazy. There are all sorts of possible forms of human social organization, which result in various outcomes for the humans involved; how am I supposed to know which one is best for people? From what I know about economics, I can point out some reasons to believe that market-like systems have some useful properties, but that doesn’t mean I should run around shouting “Yay Libertarianism Forever!” because then what happens when someone implements some form of libertarianism, and it turns out to be terrible?
There are many more ways to arrange things in a defective manner than an effective one. I’d consider deviations from the status quo to be harmful until proven otherwise.
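The claim that defective arrangements vastly outnumber effective ones can be illustrated with a toy simulation (the fitness function, dimensionality, and mutation size here are all invented assumptions, not anything from the thread): random perturbations of a configuration that is already fairly well tuned almost always make it worse.

```python
import random

def fitness(xs):
    # Toy objective: negative squared distance from an optimum at the origin.
    return -sum(x * x for x in xs)

random.seed(0)
# "Status quo": a configuration that is already fairly well tuned.
status_quo = [random.gauss(0, 0.1) for _ in range(20)]

trials = 10_000
harmful = sum(
    1
    for _ in range(trials)
    if fitness([x + random.gauss(0, 0.5) for x in status_quo])
    < fitness(status_quo)
)

print(f"{harmful / trials:.1%} of random deviations reduced fitness")
```

With these (arbitrary) parameters, nearly every random deviation lands farther from the optimum than the status quo does, which is the intuition behind “deviations are harmful until proven otherwise.”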
All formulations of human value are massively underspecified.
I agree that expecting humans to know what sorts of things would be good for humans in general is hopeless. The problem is that we also can’t get an honest report of what people think would be good for them personally, because lying is too useful and humans value things hypocritically.
Compare:
There are all sorts of possible forms of human social organization, which result in various outcomes for the humans involved; how am I supposed to know which one is best for people?
with:
what happens when someone implements some form of libertarianism, and it turns out to be terrible?
It was pretty clearly a hypothetical. As in, he doesn’t see enough evidence to justify high confidence that libertarianism would not be terrible, which is perfectly in line with his statement that he doesn’t know which system is best.
It’s a hypothetical about libertarianism. Other approaches have been tried, so the single data point does not generalise into anything like “no one ever has any evidential basis for choosing a political system or party”. To look at it from the other extreme, someone voting in a typical democracy is choosing between N parties (for some small N), each of which has been in power within living memory.
I’m starting to think this entire idea of “having political beliefs” is crazy. There are all sorts of possible forms of human social organization, which result in various outcomes for the humans involved; how am I supposed to know which one is best for people? […]
Most of my “political beliefs” amount to awareness of specific failures in other people’s beliefs.
That’s fairly common, and rarely realized, I think.
Fairly common among rational (I don’t mean LW-style) people. But I also know people who really believe things, and it’s kind of scary.
These examples also only compare things with the status quo. The status quo is most likely itself “harm” when compared to many of the alternatives.
I’d consider deviations from the status quo to be harmful until proven otherwise.
Or in other words: most mutations are harmful.
(Fixed the wording to better match the intended meaning: “compared to the many alternatives” → “compared to many of the alternatives”.)