Indeed. If we were perfect Bayesians with unlimited introspective access, and we STILL couldn’t agree after an unconscionable amount of argument and discussion, then we’d have a bigger problem.
Are perfect Bayesians with unlimited introspective access more inclined to agree on matters of first principles?
I’m not sure. I’ve never met one, much less two.
yes
They will agree on what values they have, and what the best action is relative to those values, but they still might have different values.
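To make that concrete, here is a minimal sketch of my own (the utility functions $U_1$ and $U_2$ are hypothetical, not anything claimed above): two ideal agents can share the exact same posterior over outcomes and still act differently, because expected-utility maximization runs the shared beliefs through different values:

\[
a_1^* = \arg\max_{a} \sum_{o} P(o \mid a)\, U_1(o),
\qquad
a_2^* = \arg\max_{a} \sum_{o} P(o \mid a)\, U_2(o).
\]

Since $U_1 \neq U_2$, we can have $a_1^* \neq a_2^*$ even though the agents agree on every matter of fact; the residual disagreement is purely about values.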
My point exactly. Only if we are sure agents are accurately representing their own values can we take disagreement as evidence that their values are not the same. If an agent is unsure of zir values, or extrapolates them incorrectly, then there will be disagreement that doesn’t imply different values.
With seven billion people, none of whom are accurately representing themselves (they certainly aren’t perfect Bayesians!), we should expect massive disagreement. This is not an argument for fundamentally different values.