It helps you hold off on proposing diagnoses. As tempting as it may be to dismiss people as crazy or stupid, these are dangerous labels for us biased creatures. Far fewer people are genuinely worth writing off than this kind of name-calling would tempt you to believe.
Yes, this parallels why I’ve been finding hostility in argument increasingly disturbing lately. Insofar as people are rational and honest, they should expect to agree on questions of simple fact, and insofar as they differ on questions of value, then surely they should be able to reach some sort of game-theoretic compromise superior to the default outcome. If you can anticipate disagreeing even after extended interaction, something has gone horribly wrong. I read people’s snarky swipes at the psychological motivations of their opponents, and it almost hurts—don’t they see the symmetries of the situation? Instead of rushing to call the other mad, why don’t they just jump to the meta level and ask, What do I (think I) know that they don’t? What do they (think they) know that I don’t?
Really, it should all be so simple. Figure out what questions you want to investigate, and update your model of the world based on incoming evidence, including the arguments of others. If you end up disagreeing with someone, just say: I think you’re mistaken about these-and-such specific issues because of such-and-these specific reasons. That’s it. That’s all you have to do. Anger and indignation aren’t helping you acquire the map that reflects the territory, so what would be the point?
I suppose I’ve lost a little bit of my humanity along the Way. What could be more traditionally wholesome than a delicious bout of righteous anger? But on reflection … it’s just not worth it. The sanctity of my map is too important. I’ll get my kicks some other way.
“Insofar as people are rational and honest, they should expect to agree on questions of simple fact, and insofar as they differ on questions of value, then surely they should be able to reach some sort of game-theoretic compromise superior to the default outcome. If you can anticipate disagreeing even after extended interaction, something has gone horribly wrong.”
Why would people with different motives agree? Surely they should signal holding opinions consistent with their aims, and frequently fail to update them in response to reasoned arguments, in order to signal how confident they are in their views, thereby hoping to convince others that they are correct and to bring them over to their side.
Notice that I did say “rational and honest.”
But what does this mean? Beliefs are about the world; goals are about what you would do with the world if you could rewrite it atom by atom. They’re totally different things; practically any goal is compatible with any belief, unless you’re infinitely convinced that some goal is literally impossible. Perhaps you’re saying that agents will dishonestly argue that the world is such that their goals are easier to achieve than they in fact are? I can think of some situations where agents would find that useful. For myself, I care about honesty.
The way I read it, ‘rational’ and ‘honest’ referred only to the first clause of the sentence.
For an example of an opinion consistent with an aim, consider a big tobacco sales exec who believes that cigarettes do not cause cancer.
We probably don’t actually disagree.
It’s probably a bad idea to get so caught up in the trappings of rationality that you lose your ability to empathize with humans and to understand why, for example, they have pointless arguments.
You give me far too much credit.