My point exactly. Only if we are sure that agents are representing themselves as well as possible can we be sure their values are not the same. If an agent is unsure of zir values, or extrapolates them incorrectly, then there will be disagreement that doesn’t imply different values.
With seven billion people, none of whom are representing themselves perfectly (they certainly aren’t perfect Bayesians!), we should expect massive disagreement. This is not an argument for fundamentally different values.
yes
They will agree on what values each of them has, and on what the best action is relative to those values, but they might still have different values.