I guess we’re not disagreeing about much at this point, though I think that you’re basically more optimistic than I am, and this might cause us to form different conceptions of the “overcoming bias” enterprise. I agree that we’re not Eurisko (and suddenly I’m remembering Lenat’s talk at IJCAI-77, explaining AM’s fixed-heuristics problem that then led him to Eurisko... I was a graduate student), but my feeling is that we don’t in general even have the choice of using a given heuristic less: we don’t in general have the choice of becoming a less initially biased person. Sometimes we do, and it’s worth a try, I’ll admit that. In general, however, I don’t think much of my own rationality in speech or action or even writing: it’s mainly in proofreading, especially shared proofreading, that we have the chance to overcome our biases. For this purpose, it’s perfectly possible to say “this is a valuation by prototype” or whatever, and then think a meta-thought about errors found in association with that heuristic. (Nor do I really believe that we commonly have heuristics that aren’t associated with bias—systematic error—it’s just a question of identification and of doing the best we can. Not error-free, but error-correction.)
Of course, in order to do that, you need to be conscious of your heuristics, which isn’t always possible either, but when you try to explain your opinions to somebody else, you sometimes notice the rule of inference you’re applying, and then take a step backwards. And another. And another, until the metaphor falls off the cliff. :-) But until transhumanism actually works, or until Lenat successfully mixes Eurisko and Cyc (and, as he said in 1977: “It’s our last problem. They’ll handle the next one”), I think this is the best we can do, and I get the feeling you think we can do better. But I have no confidence in such feelings.