Upvoted mostly for surprising examples about obstetrics and CF treatment and for a cool choice of topic. I think your question, “when is one like the doctors saving CF patients and when is one like the doctors doing super-radical mastectomies?” is an important one to ask, and distinct from questions about modest epistemology.
Say there is a set A of available actions, of which a subset A′⊂A has been studied intensively enough that the utilities of its members are known with a high degree of certainty, while the utilities of the remaining actions in A∖A′ are uncertain. Then your ability to surpass the performance of an agent who chooses actions only from A′ essentially comes down to two things: whether choosing uncertain-utility actions from A∖A′ precludes also picking high-utility actions from A′, and what the expected payoff of the uncertain-utility actions in A∖A′ is according to your best information.
I think you could theoretically model many domains like this and work things out just by maximizing expected utility. But it would be nice to have better heuristics to use in daily life. I think the most important questions to ask yourself are (i) how likely are you to horribly screw things up by picking an uncertain-utility action, and (ii) do you care enough about the problem you’re looking at to take lots of actions that each have a low chance of being harmful but only a small chance of being positive?
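To make the A / A′ framing concrete, here is a minimal sketch of the expected-utility comparison described above. All the action names, probabilities, and utilities are made-up illustrations, not anything from the post; the point is just the decision rule: take an action from A∖A′ only when its expected payoff under your best information beats the best well-studied action in A′.

```python
# Known actions (A'): utilities measured with high confidence.
known_utilities = {"standard_treatment": 0.7, "watchful_waiting": 0.5}

# An uncertain action in A \ A': a subjective (probability, utility)
# distribution representing one's best information about it.
uncertain_action_belief = [(0.90, 0.65), (0.08, 0.90), (0.02, -1.00)]

def expected_utility(belief):
    """Expected utility of an uncertain action under a subjective belief."""
    return sum(p * u for p, u in belief)

best_known = max(known_utilities.values())
eu_uncertain = expected_utility(uncertain_action_belief)

# Choose the uncertain action only if its expectation beats the best
# known action (ignoring risk aversion and the value of information
# gained by experimenting, which would change the answer in practice).
choice = "explore" if eu_uncertain > best_known else "exploit_known"
```

The last comment is where heuristics (i) and (ii) come back in: a plain expectation hides both the tail risk of screwing up badly and the long-run value of many cheap experiments.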
This formulation reminds me somewhat of the Bayesian approach to the likelihood of research being true from Ioannidis 2005 (“Why Most Published Research Findings Are False”).