But, from the inside, how do you tell the difference between doing actual good for others and being an omnipotent moral busybody?
“She’s the sort of woman who lives for others—you can tell the others by their hunted expression.”
Having a good factual model of a person would be necessary, and perhaps sufficient, for making that judgment favourably. When moving beyond making people more equal and free in their means, the model should be significantly better than their self-model. After that, the analyst would probably value that the people thus observed care about self-determination in the territory (so no deceiving them into thinking they’re self-determining), and act accordingly.
If people declare that analysing people well enough to know their moral values is itself being a busybody, it becomes harder. First I would note that using the internet without unusual data protection already means a (possibly begrudging) acceptance of such busybodies, up to a point. But in a more inconvenient world, consent or prevention of acute danger are as far as I would be willing to go in just a comment.
Having a good factual model of a person would be necessary, and perhaps sufficient, for making that judgment favourably.
For a single person, yes, but it takes a significant investment of time to build an accurate, factual model of even one person. It becomes impractical to do so when making decisions that affect even a mere hundred people.
How would you recommend scaling this up for large groups?
Sociology and psychology. Find patterns in human desires and behaviour, and derive universal rules from them. Either that, or scale up your resources and get yourself an fAI.
This is a difficult problem, which very few people (if any) have ever solved properly. It’s (probably) not insoluble, but it’s also not easy...
Good luck.
If someone clearly wants you to stop bothering them, then stop bothering them.
“Quit bothering me, officer, I’m super busy here.”
Willingness to be critiqued? Self-examination and scrupulous quantities of doubt? This seems kind of like the wrong question, actually. “Actual good” is a fuzzy concept, if it even exists at all; a benevolent tyrant cares whether or not they are fulfilling their values (which, presumably, includes “provide others with things I think are good”). The question I would ask is how you tell the difference between actually achieving the manifestation of your values and only making a big show of it; presumably it’s the latter that causes the problem (or at least the problem that you care about).
Then again, this comes from a moral non-realist who doesn’t see a contradiction in having a moral clause saying it’s good to enforce your morality on others to some extent, so your framework’s results may vary.
Willingness to be critiqued? Self-examination and scrupulous quantities of doubt?
Both of these will help. A lot.
“Actual good” is a fuzzy concept
True. One could go with “that which causes the greatest happiness”, but one shouldn’t be putting mood-controlling chemicals in the water. One could go with “that which best protects human life”, but one shouldn’t put humanity into a (very safe) zoo where nothing dangerous or interesting can ever happen to them.
This is therefore a major problem for someone actually trying to be a benevolent leader—how to go about it?
The question I would ask is how you tell the difference between actually achieving the manifestation of your values and only making a big show of it
I’d suggest having some metric by which your values can be measured, and measuring it on a regular basis. For example, if you think that a benevolent leader would do best by reducing crime, then you can measure that by tracking crime statistics.
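To make that concrete, here is a minimal sketch (my illustration, not part of the original suggestion) of tracking one such metric over time, assuming crime rate per 100,000 people as the chosen measure; the figures and the simple year-over-year comparison are made up for the example.

```python
# Illustrative sketch of "pick a metric and measure it regularly".
# The metric (crimes per 100k) and the numbers below are hypothetical.

crimes_per_100k_by_year = {
    2020: 512.0,
    2021: 498.5,
    2022: 473.2,
    2023: 481.9,
}

def year_over_year_change(series: dict[int, float]) -> dict[int, float]:
    """Return the change in the metric from each year to the next."""
    years = sorted(series)
    return {
        later: series[later] - series[earlier]
        for earlier, later in zip(years, years[1:])
    }

if __name__ == "__main__":
    for year, delta in year_over_year_change(crimes_per_100k_by_year).items():
        trend = "improved" if delta < 0 else "worsened"
        print(f"{year}: {delta:+.1f} per 100k ({trend})")
```

The point is only that whatever value you claim to be serving gets an observable number attached to it, checked on a schedule, so that "making a big show of it" and "actually moving the number" can come apart visibly.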