Pop quiz: explain to me why I should program my FAI to consider materially different humans to have different ethical weight, to have their values and cognitive algorithms compose differently weighted portions of the AI’s utility function.
Ethically equal does not mean materially the same. For God’s sake, this is so simple and obvious there are children’s books that know it.
God probably being the central word in that sentence.
It’s not easy to answer. I think it might even be on MIRI’s open problems list.
Then why restrict to humans? Or animals, for that matter?
Well, frankly, because I happen to be human, and because once you go beyond animals you cease to see any mental functioning that I could even call subjective valuation. Even if I’m choosing to be omnicompassionate, you need to at least restrict to consciously aware creatures.
Not doing so might leave your AI vulnerable to a slower/milder version of this. Basically, if you enter a strictly egalitarian weighting, you are vindicating those who thoughtlessly brought children into the world and disincentivizing, in a timeless, acausal sense, those who are acting sensibly today and restricting reproduction to children they can bring up properly.
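To make that incentive concrete, here’s a toy sketch (the factions, numbers, and function name are made up purely for illustration):

```python
# Toy model (all factions and numbers hypothetical): an AI aggregating
# individual utilities under a strictly egalitarian weighting w_i = 1/N.
def aggregate(utilities, weights=None):
    """Weighted sum of individual utilities; equal weights by default."""
    n = len(utilities)
    if weights is None:
        weights = [1.0 / n] * n  # strictly egalitarian weighting
    return sum(w * u for w, u in zip(weights, utilities))

# Faction A (3 members) and faction B (1 member) want opposite policies;
# each member gets utility 1 if the policy favors their faction, else 0.
favors_a = [1, 1, 1, 0]
favors_b = [0, 0, 0, 1]
print(aggregate(favors_a))  # 0.75: A prevails purely by headcount
print(aggregate(favors_b))  # 0.25
```

Under equal per-person weights, a faction’s pull on the aggregate scales directly with how many members it produces, which is exactly the reproduction incentive I’m pointing at.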
I’m not very certain of this answer, but it is my best attempt at the question.
Good grief. You know, we already have nation-states for this sort of thing. If people form coherent separate “groups”, such that mixing the groups results in a zero-sum conflict over resources (including “utility function voting space”), then you just keep the groups separate in the first place.
EDIT: Ah, the correct word here is clusters.
So, is my understanding correct that your FAI is going to consider only your group/cluster’s values?
Of course not.
Not to mention those who persecuted and genocided ideological opponents.
Yes, that too.
Poland used a version of that argument when negotiating with the European Union over its share in some commission, I don’t remember which. It cited how large Poland’s population might have been had the country not been attacked on two fronts, by the Nazis and the Communists.