I’d like to see a post on this, especially if you have any insights or knowledge on how we can make those black-box circuits work better, or how to best combine formal probability calculations with those black-box circuits.
Well, that would be a very ambitious idea for an article! One angle I think might be worth exploring would be a classification of problems with regard to how the outputs of the black-box circuits (i.e. our intuitions) perform compared to the formal models we have. Clearly, among the problems we face in practice, we can point out great extremes in all four directions: problems can be trivial for both intuition and formal models, or altogether intractable, or easily solvable with formal models but awfully counterintuitive (e.g. the Monty Hall problem), or easily handled by intuition but beyond the reach of our present formal models (e.g. many AI-complete problems). I think a systematic classification along these lines might open the way to some general insight about how to best reconcile, and perhaps even combine productively, our intuitions with the best available formal calculations. But this is just a half-baked idea I have, which may or may not evolve into more systematic thoughts worth posting.
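To make the Monty Hall quadrant concrete, here is a minimal Monte Carlo sketch (the trial count and door-numbering scheme are my own arbitrary choices) showing how easily a formal or computational treatment settles what intuition resists: switching wins about 2/3 of the time, not 1/2.

```python
import random

def monty_hall(trials=100_000):
    """Simulate the Monty Hall game, comparing 'stay' vs 'switch' strategies."""
    switch_wins = stay_wins = 0
    for _ in range(trials):
        prize = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)    # contestant's initial pick
        # Host opens a door that is neither the pick nor the prize
        opened = next(d for d in range(3) if d != pick and d != prize)
        # Switching means taking the remaining unopened door
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == prize)
        switch_wins += (switched == prize)
    return stay_wins / trials, switch_wins / trials

if __name__ == "__main__":
    stay, switch = monty_hall()
    print(f"stay ~ {stay:.3f}, switch ~ {switch:.3f}")  # roughly 0.333 vs 0.667
```

The point of the sketch is just that this quadrant's problems yield to a few lines of formal reasoning (or brute simulation) even while the intuitive answer stays stubbornly wrong.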