That calibration question raised a broader question for me. I haven’t seen it discussed here, though I suspect it has been and I just haven’t found the thread yet; if so, please point me in the right direction.
The point, in short, is that my gut reaction to honing my subjective probability estimates is “I’m not interested in trying to be a computer.” As far as I can tell, all the value I’ve gotten out of rationalist practice has been qualitative rather than numerical, such as becoming aware of, and learning to notice, the subjective feeling of protecting a treasured belief from facts. (Insert the standard caveats, please.) I’m not sure what I would gain by taking the time to train myself to generate better numerical estimates of how likely my guesses are to be correct.
In this particular case, I guessed 1650 based on my knowledge that Newton was doing his thing in the mid-to-late seventeenth century. I put 65% confidence on the 15-year interval, figuring the true date might be a decent bit later than 1650 but not terribly much. It turns out I was wrong about that interval, so I now know that, given that situation and the amount of confidence I felt, I should probably have given a smaller confidence estimate. How much smaller I really don’t know; I suspect that’s what this “HonoreDB’s rule” others are talking about addresses.
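(For concreteness, here is a minimal sketch, purely my own illustration with made-up numbers rather than anything HonoreDB prescribed, of what “checking calibration” seems to amount to: log each guess along with the confidence you stated, then see how often you were actually right at each stated level.)

```python
# Minimal calibration check (illustrative only; the data below is invented).
# Log (stated confidence, was the guess correct?) pairs, then compare the
# stated confidence to the fraction of guesses that actually turned out right.
from collections import defaultdict

guesses = [
    (0.65, False),  # e.g., the Newton-date interval above
    (0.65, True),
    (0.80, True),
    (0.80, True),
    (0.50, False),
]

by_confidence = defaultdict(list)
for confidence, correct in guesses:
    by_confidence[confidence].append(correct)

for confidence in sorted(by_confidence):
    outcomes = by_confidence[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%}: right {hit_rate:.0%} of the time "
          f"({len(outcomes)} guesses)")
```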
But the thing is, I’m not sure what I would gain if it turned out that 65% really was the right confidence to assign. If my confidence estimates turn out to be really, really well-honed, so what?
On the flip side, if it turned out that 60% was the right confidence to assign and I spent some time calibrating my probability estimates, what would I gain by doing that? That might be valuable for building an AI, but I’m a social scientist, not an AI researcher. I find it valuable to notice the planning fallacy in action, but I don’t find it terribly helpful to be skilled at, say, knowing that when I claim an 85% chance of arriving within five minutes of when I said I would, that claim tends to be accurate. What’s helpful is knowing that a given trip usually takes about 25 minutes, and that fairly often but seemingly at random traffic stretches it to 40, so if I need to be on time I leave 40 minutes before I need to be there and bring something I can do for 15 minutes in case traffic is smooth. No explicit probabilities needed.
But I see this business about learning to make better conscious probability estimates coming up fairly often on Less Wrong. It was even a theme in the Boot Camps, if I understand correctly. So either quite a number of very smart people who spend their time thinking about rationality all managed to converge on valuing something that doesn’t actually matter that much to rationality, or I’m missing something.
Would someone do me the courtesy of helping me see what I’m apparently blind to here?
ETA: I just realized that the rule everyone is talking about must be what HonoreDB outlined in the lead post of this discussion. Sorry for not noticing that. But my core question remains!