“inside view” and “outside view” seem misleading labels for things that are actually “Bayesian reasoning” and “Bayesian reasoning deliberately ignoring some evidence to account for flawed cognitive machinery”. The only reason for applying the “outside view” is to compensate for our flawed machinery, so to attack an “inside view”, one needs to actually give a reasonable argument that the inside view has fallen prey to bias. This argument should come first; it should not be assumed.
Obviously the distinction depends on being able to distinguish inside from outside considerations in any particular context. But given such a distinction there is no asymmetry—both views are not full views, but instead focus on their respective considerations.
Well, an ideal Bayesian would unashamedly use all available evidence. It’s only our flawed cognitive machinery that suggests ignoring some evidence might sometimes be beneficial. But the burden of proof should be on the one who suggests that a particular situation warrants throwing away some evidence, rather than on the one who reasons earnestly from all evidence.
I don’t think ideal Bayesians use burden of proof either. Who has the burden of proof in demonstrating that burden of proof is required in a particular instance?
Occam’s razor: the more complicated hypothesis acquires a burden of proof.
In which case there’s some specific amount of distinguishing evidence that would promote the more complicated hypothesis over the simpler one, at which point, I suppose, the other side would acquire this “burden of proof” of which you speak?
Not sure that I understand (I’m not being insolent, I just haven’t had my coffee this morning). Claiming that “humans are likely to over-estimate the chance of a hard-takeoff singularity in the next 50 years and should therefore discount inside view arguments on this topic” requires evidence, and I’m not convinced that the standard optimism bias literature applies here. In the absence of such evidence one should accept all arguments on their merits and just do Bayesian updating.
If we are going to have any heuristics that say that some kinds of evidence tend to be overused or underused, we have to be able to talk about sets of evidence that are less than the total set. The whole point here is to warn people that our evidence suggests people tend to over-rely on inside evidence relative to outside evidence.
Agreed. My objection is to cases where inside view arguments are discounted completely on the basis of experiments that have shown optimism bias among humans, but where it isn’t clear that optimism bias actually applies to the subject matter at hand. So my disagreement is about degrees rather than absolutes: How widely can the empirical support for optimism bias be generalized? How much should inside view arguments be discounted? My answers would be, roughly, “not very widely” and “not much outside traditional forecasting situations”. I think these are tangible (even empirical) questions and I will try to write a top-level post on this topic.
What would you call the classic drug testing example where you use the outside view as a prior and update based on the test results?
If the test is sufficiently powerful, it seems like you’d call it using the “inside view” for sure, even though it really uses both, and is a full view.
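A minimal sketch of that drug-testing update, with illustrative numbers of my own (the base rate, sensitivity, and false-positive rate below are assumptions, not figures from the thread): the base rate supplies the outside-view prior, the test result supplies the inside evidence, and a sufficiently specific test leaves the posterior dominated by the test rather than the prior.

```python
# Classic drug-testing Bayesian update: outside-view base rate as the prior,
# inside evidence from the test result. All numbers are illustrative.

base_rate = 0.01        # prior: 1% of the population uses the drug (outside view)
sensitivity = 0.99      # P(positive test | user)
false_positive = 0.05   # P(positive test | non-user)

# Bayes' theorem: P(user | positive test)
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive
print(f"P(user | positive) = {posterior:.3f}")   # ~0.167: the prior still matters a lot

# With a far more specific test, the "inside" evidence dwarfs the prior.
false_positive = 0.001
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive
print(f"P(user | positive) = {posterior:.3f}")   # ~0.909: mostly driven by the test
```

Both runs use the same prior and the same update rule; only the strength of the inside evidence changes, which is why the combined answer can look like “the inside view” without ever ignoring the outside view.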
I think the issue is not that one ignores the outside view when using the inside view; I think it’s that in many cases the outside view only makes very weak predictions that are easily dwarfed by the amount of information one has at hand for using the inside view.
In these cases, it only makes sense to believe something close to the outside view if you don’t trust your ability to use more information without shooting yourself in the foot, which is alexflint’s point.
I really can’t see why a prior would correspond more to an outside view. The issue is not when the evidence arrived; it is more about whether the evidence is based on a track record or on reasoning about process details.
Well, you can switch around the order in which you update anyway, so that’s not really the important part.
My point was that in most cases, the outside view gives a much weaker prediction than the inside view taken at face value. In these cases using both views is pretty much the same as using the inside view by itself, so advocating “use the outside view!” would be better translated as “don’t trust yourself to use the inside view!”
I can’t imagine what evidence you think there is for your claim “in most cases, the outside view gives a much weaker prediction.”
Weakness as in the force of the claim, not how well-supported the claim may be.
This confuses me. What force of a claim should I feel, that does not come from it being well-supported?
Okay, rephrase: Suppose I pull a crazy idea out of my hat and scream “I am 100% confident that every human being on earth will grow a tail in the next five minutes!” Then I am making a very forceful claim, which is not well-supported by the evidence.
The idea is that the outside view generally makes less forceful claims than the inside view—allowing for a wider range of possible outcomes, not being very detailed or precise, and not claiming a great deal of confidence. If we were to take both the outside view and the inside view perfectly at face value, giving them equal credence, the sum of the two would be mostly the inside view. So saying that the sum of the outside view and the inside view equals mostly the outside view must imply that we do not trust the inside view to deserve the strength it claims for itself, which is indeed the argument being made.
Thank you, I understand that much better.
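To make the arithmetic in that explanation concrete, here is a small sketch (all numbers are illustrative assumptions of mine, not anyone’s actual estimates) that treats the outside view as a broad Gaussian estimate and the inside view as a sharp one. Combined at face value, the result is mostly the inside view; only by discounting the inside view’s claimed precision does the combination move back toward the outside view.

```python
# Precision-weighted combination of two Gaussian estimates of the same quantity,
# e.g. months until a project is finished. Numbers are purely illustrative.

def combine(mean_a, var_a, mean_b, var_b):
    """Combine two Gaussian estimates via Bayes (precision-weighted average)."""
    prec_a, prec_b = 1 / var_a, 1 / var_b
    mean = (prec_a * mean_a + prec_b * mean_b) / (prec_a + prec_b)
    var = 1 / (prec_a + prec_b)
    return mean, var

outside = (12.0, 36.0)   # "projects like this take ~12 months, give or take" (broad)
inside = (4.0, 1.0)      # detailed plan says ~4 months, quite confidently (sharp)

print(combine(*outside, *inside))             # ~(4.2, 0.97): mostly the inside view

# Distrusting the inside view = inflating its claimed variance before combining.
discounted_inside = (4.0, 25.0)
print(combine(*outside, *discounted_inside))  # ~(7.3, 14.8): pulled toward the outside view
```

The face-value combination barely differs from the inside view alone, which is why “use the outside view!” only has bite if it is read as “don’t trust the inside view’s claimed precision.”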