An important part of the quote, it seems, is the “may be” in “may be the most oppressive”. There is only an issue if the goodness of these “omnipotent moral busybodies” is actually so different from our own that we suffer under it; a goodness well-executed would perhaps never even be called a tyranny at all.
But, from the inside, how do you tell the difference between doing actual good for others and being an omnipotent moral busybody?
“She’s the sort of woman who lives for others—you can tell the others by their hunted expression.”
Having a good factual model of a person would be necessary, and perhaps sufficient, for making that judgment favourably. When moving beyond making people more equal and free in their means, the model should be significantly better than their self-model. After that, the analyst would probably value the observed people’s caring about self-determination in the territory (so no deceiving them into thinking they’re self-determining), and act accordingly.
If people declare that analysing people well enough to know their moral values is itself being a busybody, it becomes harder. First I would note that using the internet without unusual data protection already means a (possibly begrudging) acceptance of such busybodies, up to a point. But in a more inconvenient world, consent or prevention of acute danger are as far as I would be willing to go in just a comment.
Having a good factual model of a person would be necessary, and perhaps sufficient, for making that judgment favourably.

For a single person, yes, but building an accurate, factual model of even one person takes a significant investment of time. It becomes impractical when making decisions that affect even a mere hundred people.
How would you recommend scaling this up for large groups?
Sociology and psychology. Determine patterns in human desires and behaviour, and derive universal rules from them. Either that, or scale up your resources and get yourself an fAI.
This is a difficult problem, which very few people (if any) have ever solved properly. It’s (probably) not insoluble, but it’s also not easy...
Good luck.
If someone clearly wants you to stop bothering them, then stop bothering them.
“Quit bothering me, officer, I’m super busy here.”
Willingness to be critiqued? Self-examination and scrupulous quantities of doubt? This seems kind of like the wrong question, actually. “Actual good” is a fuzzy concept, if it even exists at all; a benevolent tyrant cares whether or not they are fulfilling their values (which, presumably, include “provide others with things I think are good”). The question I would ask is how you tell the difference between actually realizing your values and only making a big show of it; presumably it’s the latter that causes the problem (or at least the problem that you care about).
Then again, this comes from a moral non-realist who doesn’t see a contradiction in having a moral clause saying it’s good to enforce your morality on others to some extent, so your framework’s results may vary.
Willingness to be critiqued? Self-examination and scrupulous quantities of doubt?

Both of these will help. A lot.
“Actual good” is a fuzzy concept

True. One could go with “that which causes the greatest happiness”, but one shouldn’t be putting mood-controlling chemicals in the water. One could go with “that which best protects human life”, but one shouldn’t put humanity into a (very safe) zoo where nothing dangerous or interesting can ever happen to them.
This is therefore a major problem for someone actually trying to be a benevolent leader—how to go about it?
The question I would ask is how you tell the difference between actually realizing your values and only making a big show of it

I’d suggest having some metric by which your values can be measured, and measuring it on a regular basis. For example, if you think that a benevolent leader would do best by reducing crime, then you can measure that by tracking crime statistics.
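A toy sketch of what that regular measurement might look like; the metric, the numbers, and the threshold here are all invented purely for illustration:

    # Toy example: track a yearly crime-rate metric and check its trend.
    # All figures are made up for illustration.
    crime_rate_per_1000 = {
        2010: 42.0,
        2011: 39.5,
        2012: 40.1,
        2013: 37.8,
        2014: 35.2,
    }

    def trend(series):
        """Average year-over-year change; negative means the metric is falling."""
        years = sorted(series)
        changes = [series[b] - series[a] for a, b in zip(years, years[1:])]
        return sum(changes) / len(changes)

    if trend(crime_rate_per_1000) < 0:
        print("Crime is trending down; by this metric, the policy is working.")
    else:
        print("Crime is flat or rising; time to re-examine the policy.")

Of course, the hard part is choosing a metric that tracks the value itself rather than something easy to game; a falling number is evidence of benevolence, not proof of it.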
I think you entirely missed the point.
I don’t think that helps AndHisHorse figure out the point.
I can’t do his thinking for him.
You may not be able to make a horse drink, but you can still lead it to water rather than merely point out it’s thirsty. Teaching is a thing that people do with demonstrated beneficial results across a wide range of topics. Why would this be an exception?
I think you overestimate the extent to which many LW users comment to help others understand things, as opposed to (say) gain social status at their expense.
Be careful when defining the winner as someone other than the one currently sitting on a mound of utility.
Most LessWrong users at least profess to want to be above social status games, so calling people out on it increases expected comment quality and personal social status/karma, at least a little.
Unfortunately, professing something does not make it true any more than putting a sign saying “Cold” on a refrigerator that isn’t plugged in will make it cold.
I’m not pointing out that it’s thirsty; I’m pointing out that there is no water where it thinks to drink.
In the analogy, water represents the point of the quote (possibly as applied to CEV). You’re saying there is no point. I don’t understand what you’re trying to say in a way that is meaningful, but I won’t bother asking because ‘you can’t do my thinking for me’.
Edit: fiiiine, what do you mean?
As best I understood it, the point was that one’s belief in one’s own goodness is a source of drive—and if that goodness is false, the drive is misaimed, and the greater drive makes for greater ill consequences.
I think we agree that belief in one’s own goodness can go quite wrong, and in cases such as the quote describes, more wrong than an all-other-things-being-equal belief in one’s own evil. Where we seem to disagree is on the inevitability of this failure mode: I acknowledge that it exists and that we should be cautious about it (although that may not have come across), whereas you seem to be implying that it is so prevalent that it would be better not to try to be a good overlord at all.
Am I understanding your position correctly?
Partially. Yes, I would assert that the failure mode you’re talking about is prevalent (and point to a LOT of history to support that assertion; no one is evil in his own story). However, the main point of the quote we’re talking about isn’t quite that, I think. Instead, consider such concepts as “autonomy”, “individuality”, and “diversity”.