Well, as I said in this same thread, things like egalitarianism, female rights, minority rights, etc. have been found to be normatively binding due to the falsification of the normativity of certain social structures, usually patriarchy, royalty, and religious rule. Upon finding that those things are unjustified, we revert to the default that everyone is equal simply because there needs to be a reason to ascribe difference!
On what grey planet are you living that “everyone is simply equal” is the “default”?
Ethically equal does not mean materially the same. For God’s sake, this is so simple and obvious that there are children’s books that know it.
God probably being the central word in that sentence.
Pop quiz: explain to me why I should program my FAI to consider materially-different humans to have different ethical weight, to have their values and cognitive-algorithms compose differently-weighted portions of the AI’s utility function.
It’s not something easy to answer. I think it might even be on MIRI’s open problems list.
Then why restrict to humans? Or animals, for that matter?
Well, frankly, because I happen to be human, and because once you move beyond animals you cease to see any mental functioning that I could even call subjective valuation. Even if I’m choosing to be omnicompassionate, you need to at least restrict to consciously aware creatures.
Not doing so might leave your AI vulnerable to a slower/milder version of this. Basically, if you enter a strictly egalitarian weighting, you are vindicating those who thoughtlessly brought children into the world and disincentivizing, in a timeless, acausal sense, those who are acting sensibly today and restricting reproduction to children they can bring up properly.
I’m not very certain of this answer, but it is my best attempt at the question.
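[A minimal numerical sketch of the incentive being described, assuming nothing beyond the comment itself: the cluster sizes, utilities, and the aggregate helper below are all hypothetical. It only shows that under an equal per-person weighting, a cluster’s pull on the aggregate utility scales with its headcount, whereas an equal per-cluster weighting ignores population size.]

```python
# Hypothetical illustration only: cluster sizes, utilities, and this helper
# are invented for the sake of the arithmetic, not taken from the thread.

def aggregate(clusters, per_person=True):
    """Weighted sum of cluster utilities for some outcome.

    clusters: list of (population, utility) pairs.
    per_person=True  -> each individual gets equal weight, so a cluster's
                        influence grows with its headcount.
    per_person=False -> each cluster gets equal weight regardless of size.
    """
    if per_person:
        total = sum(pop for pop, _ in clusters)
        return sum((pop / total) * u for pop, u in clusters)
    return sum(u for _, u in clusters) / len(clusters)

# A large cluster that mildly dislikes an outcome vs. a small one that likes it.
clusters = [(9_000_000, 0.2), (1_000_000, 0.9)]

print(aggregate(clusters, per_person=True))   # ≈ 0.27: headcount dominates
print(aggregate(clusters, per_person=False))  # ≈ 0.55: cluster size is ignored
```

[Under the first weighting, whichever cluster adds more members gains more of the “voting space”; under the second, extra headcount gains nothing. That difference is the acausal incentive the comment is pointing at.]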
Good grief. You know, we already have nation-states for this sort of thing. If people form coherent separate “groups”, such that mixing the groups results in a zero-sum conflict over resources (including “utility function voting space”), then you just keep the groups separate in the first place.
EDIT: Ah, the correct word here is clusters.
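[To make the “keep the clusters separate” idea concrete, here is another hypothetical sketch; the cluster names, options, and utility functions are invented. Instead of pooling everyone into one aggregate where weight is zero-sum, each cluster’s own utility function is applied only to its own share of decisions.]

```python
# Hypothetical illustration of keeping clusters separate: each cluster's own
# utility function governs its own share of decisions, so neither cluster's
# weighting dilutes the other's. Names, options, and utilities are invented.

def best_option(options, utility):
    """Return the option this cluster's utility function ranks highest."""
    return max(options, key=utility)

clusters = {
    "cluster_a": lambda o: o["leisure"],      # cares mostly about leisure
    "cluster_b": lambda o: o["production"],   # cares mostly about production
}

options = [
    {"leisure": 0.8, "production": 0.3},
    {"leisure": 0.2, "production": 0.9},
]

# No shared utility function, hence no zero-sum fight over weights:
for name, utility in clusters.items():
    print(name, "picks", best_option(options, utility))
```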
So, is my understanding correct that your FAI is going to consider only your group/cluster’s values?
Of course not.
Not to mention those who persecuted and genocided ideological opponents.
Yes, that too.
Poland used a version of that when arguing with the European Union about its share in some commission, I don’t remember which. The argument pointed out how large Poland’s population might have been had it not been under attack from two fronts, the Nazis and the Communists.
This is one of the funnier things I’ve read this year.
You mean like the fact that people have different strength, intelligence, personality, ability, etc.
Those are not ethical traits. Honestly, there are arguments you could be using that you’re failing to use here. Instead, you and your comrades seem to enjoy using the downvote button as a form of evidence.
You know, there are actual investigations into these things.
Seeking the specific case, not the general case.