I think that there are other sources of social incentives for accuracy.
I imagined the “running late for a noon meeting” scenario, with myself in the role of the person who is waiting for you to arrive. At least in my mental simulation, I have a better impression of you if you say “ETA 12:10” and then show up at 12:10 than if you say “ETA 12:05” and then show up at 12:10. I see you as more reliable when your ETA is accurate; the hit in perceived reliability that you take by being later than the originally scheduled time is significantly smaller than the hit you take by being later than your ETA. So my judgments of people seem to be encouraging accurate estimates (at least in my simulation of this scenario). This effect becomes even stronger if I imagine similar interactions with you happening more than once.
Thinking about why this happens, while aiming for reasons that might generalize to other sorts of time estimation and broader sets of biases, I’ve come up with 4 factors:
Repeated interactions. When someone has a track record, patterns become evident. Oh really, traffic was surprisingly bad yet again? Or, in statistical terms, as the number of data points increases, the average noise shrinks and systematic errors stand out (see the small simulation after this list).
The process that you’re using to make your estimate is partially transparent. Just as you give off signs when you try to lie, you also give off signs when you try to think things through to give an accurate estimate, or when you try to set things up so that you won’t be blamed for them going badly. For example, if I come to you with a task and you say that it will take you 3 hours to do it, the questions that you asked before making that estimate will give me some clue about whether you’re using an accuracy-seeking process.
Your stance towards me is partially transparent. When you’re running late, is your aim “I value his time and don’t want him to have to sit there waiting” or “I want him to think that this mostly wasn’t my fault”? If you come to me with a project that needs my approval to move forward, are you thinking “How can I convince him to approve this?” or “Let’s think this through together to figure out if it’s worth doing”? The cooperative approach, where you see me as an ally and fellow agent in trying to create good outcomes, is well-served by accurate estimates which are communicated clearly. And people give off various signs of whether they have that approach.
Accurate estimation is a valued ability. If someone can consistently say things like “I’ll be there in 8 minutes” or “that will take about 3 hours” and be very close to correct, that is an impressive ability which will lead me to rely more on their judgment. If they make systematic errors, or if their estimates are very noisy, then I will rely on them less. Trying to hide some bias in the noise is not such a great strategy, since noise also reflects a lack of skill at prediction. Interval estimates (e.g., “I’ll be there in 20-25 minutes”) help make the ability to give unbiased low-noise estimates more apparent.
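Since factor 1 makes a quantitative claim, here is a minimal Python sketch of it; the specific numbers (a fixed 3-minute bias, 8 minutes of noise) are assumptions for illustration only. The point is that the average of repeated ETA misses converges on the systematic bias while the noise averages away.

```python
import random
import statistics

# Illustration of factor 1 (repeated interactions expose bias).
# Assume each miss = systematic bias + zero-mean noise, in minutes.
# These numbers are made up for the sketch.
BIAS = 3.0      # habitually ~3 minutes later than the stated ETA
NOISE_SD = 8.0  # traffic, weather, and other one-off surprises

random.seed(0)

def observed_misses(n):
    """Return n observed (actual arrival - stated ETA) values, in minutes."""
    return [BIAS + random.gauss(0, NOISE_SD) for _ in range(n)]

for n in (3, 30, 300):
    misses = observed_misses(n)
    mean = statistics.mean(misses)
    # The standard error of the mean shrinks like 1/sqrt(n), so the
    # 3-minute systematic error stands out once n is large enough.
    sem = statistics.stdev(misses) / n ** 0.5
    print(f"n={n:3d}: mean miss = {mean:+5.2f} min (std. error {sem:.2f})")
```

With a handful of interactions the bias is hidden in the noise; after a few hundred observations (or after a small community pools its impressions) it is unmistakable.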
These 4 factors don’t solve the problem. Just as there are incentives to become a skilled liar, there are incentives to convincingly pretend that you value someone’s input or to make your track record look better than your actual judgment warrants. But these 4 factors do seem to help, especially in smallish communities where the same people have repeated interactions (and can gather more data by sharing impressions with others). To the extent that the rationality community (sometimes) succeeds at encouraging accuracy-seeking, I suspect that much of it comes from creating a social context where these factors are more strongly at play.
The relevant comparison is if you know you are going to arrive at either 12:15 or 12:05 equiprobably—do you say “12:10” or “12:07”? Or, if you are giving a distribution, do you say that the two are equiprobable, or claim a 2⁄3 chance of 12:05?
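A hedged aside on the distributional version of this question: one way to cash it out (a framing I am adding, not one stated above) is to imagine the listener scoring stated distributions with a proper scoring rule such as the log score. Then reporting the honest 50/50 beats claiming a 2⁄3 chance of 12:05 in expectation, as this minimal Python check shows:

```python
import math

# If your true belief is 50/50 between arriving at 12:05 and 12:15,
# how do different *reported* probabilities fare under the log score?
TRUE_P = 0.5  # honest credence of arriving at 12:05

def expected_log_score(reported_p):
    """Expected log score (higher is better) of reporting `reported_p`
    for 12:05 when the true chance is TRUE_P."""
    return TRUE_P * math.log(reported_p) + (1 - TRUE_P) * math.log(1 - reported_p)

for p in (1 / 2, 2 / 3):
    print(f"report P(12:05) = {p:.2f}: expected log score = {expected_log_score(p):+.4f}")

# The honest 0.50 report scores about -0.69 in expectation versus
# about -0.75 for the 2/3 claim: under a proper scoring rule, the
# honest distribution is the best one to report.
```

So a community that evaluates stated distributions this way makes shading toward 12:05 a losing move in expectation, which is one concrete mechanism for strengthening the incentive for accuracy.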
Consciously, I am thinking “Let’s think this through together to figure out if it’s worth doing,” not “how can I convince him to approve this?” I’m not at all convinced that the difficulty of lying extends to the difficulty of maintaining a mismatch between conscious reasoning and various subconscious processes that feed into estimates.
I’m imagining signs during the conversation like: If it starts to look like some other project would be more valuable than the idea you came in with, do you seem excited or frustrated? Or: If a new consideration comes up which might imply that your project idea is not worth doing, do you pursue that line of thought with the same sort of curiosity and deftness that you bring to other topics?
These are different from the kinds of tells that a person gives when lying, but they do point to the general rule of thumb that one’s mental processes are typically neither perfectly opaque nor perfectly transparent to others. They do seem to depend on the processes that are actually driving your behavior; merely thinking “Let’s think this through together” will probably not make you excited/curious/etc. if your subconscious processes aren’t in accord with that thought.
Returning to the “12:10” versus “12:07” comparison: these are subtle enough differences that I don’t have clear intuitions about which ETA would lead me to have the most positive impression of the person who showed up late.
I agree with your broader point that there are social incentives which favor various sorts of inaccuracy, and that accuracy won’t always create the best impression. My broader point is that there are also social incentives for accuracy, and various indicators of whether a person is seeking accuracy, and it’s possible to build a community that strengthens those relative to the incentives for inaccuracy.