I think it’s that any basis set I define in a super high dimensional space could be said to be value-laden, though the values might be tacit and I might have little idea what they are. If I care about ‘causal structure’ or something, that’s still relative to the sorts of affordances that are relevant to me in that space?
Is this the same value payload that makes activists fight over language to make human biases work for their side? I don’t think this problem translates to AI: If the AGIs find that some metric induces some bias, each can compensate for it.