“Insofar as people are rational and honest, they should expect to agree on questions of simple fact, and insofar as they differ on questions of value, then surely they should be able to reach some sort of game-theoretic compromise superior to the default outcome. If you can anticipate disagreeing even after extended interaction, something has gone horribly wrong.”
Why would people with different motives agree? Surely they should signal that they hold opinions consistent with their aims, and frequently fail to update those opinions in response to reasoned arguments, in order to signal how confident they are in their views, hoping thereby to convince others that they are correct and to win them over to their side.
Notice that I did say “rational and honest.”
But what does this mean? Beliefs are about the world; goals are about what you would do with the world if you could rewrite it atom by atom. They’re totally different things; practically any goal is compatible with any belief, unless you’re infinitely convinced that some goal is literally impossible. Perhaps you’re saying that agents will dishonestly argue that the world is such that their goals are easier to achieve than they actually are? I can think of some situations where agents would find that useful. For myself, I care about honesty.
The way I read it, ‘rational’ and ‘honest’ referred to the first clause of the sentence only.
For an example of an opinion consistent with an aim, consider a Big Tobacco sales executive who believes that cigarettes do not cause cancer.
We probably don’t actually disagree.