I’m not convinced conceptual distance metrics must be value-laden. Represent each utility function by an AGI. Almost all of them should be able to agree on a metric such that each could adopt that metric in its thinking while losing only negligible value. The same could not be said for agreeing on a utility function. (The same could be said for agreeing on a utility-parametrized AGI design.)
This implies a measure over utility functions. It’s probably true under the Solomonoff measure, but abstract though such measures are, choosing one is itself a choice of values.
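To make the disagreement concrete, here is a minimal formalization of the claim and the objection. All symbols (the measure μ, the candidate metric d, and the loss L_U) are introduced here purely for illustration and do not appear in the original comments.

```latex
% Hedged formalization of the exchange above; all symbols are illustrative.
% U      : a utility function
% \mu    : a measure over utility functions (e.g. a Solomonoff-style simplicity prior)
% d      : a candidate conceptual distance metric
% L_U(d) : the value an agent with utility U forgoes by doing its thinking with metric d
%
% The claim: some single metric d^* is near-optimal for \mu-almost-all utility functions.
\[
  \exists\, d^{*}\ \ \text{such that}\ \
  \mu\!\left(\{\, U \;:\; L_U(d^{*}) > \varepsilon \,\}\right) < \delta
  \qquad \text{for negligibly small } \varepsilon,\ \delta .
\]
% The objection: "almost all" is only defined relative to the choice of \mu,
% and picking \mu (Solomonoff or otherwise) is itself a value-laden step.
```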
I think it’s that any basis set I define in a super-high-dimensional space could be said to be value-laden, though that value-ladenness might be tacit and I might have little idea what it is. If I care about ‘causal structure’ or something, that’s still relative to the sorts of affordances that are relevant to me in the space?
Is this the same value payload that makes activists fight over language to make human biases work for their side? I don’t think this problem translates to AI: If the AGIs find that some metric induces some bias, each can compensate for it.