While I think your reading is consistent with a very generous application of the principle of charity, I’m not certain it’s appropriate in this case to so apply. Do you have any evidence that Warren was reasoning in this way rather than the less-charitable version, and if so, why didn’t he say so explicitly?
It really seems like the simpler explanation is fear plus poor thinking.
Do you have any evidence that Warren was reasoning in this way rather than the less-charitable version, and if so, why didn’t he say so explicitly?
Sorry for taking so long to reply to this.
I think that a close and strict reading supports my interpretation. I don’t see the need for an unduly charitable reading.
First, I assume the following context for the quote: Warren had argued for (or maybe only claimed) a high probability for the proposition that there is a Japanese fifth column within the US. Let R be that proposition. Then Warren has argued that p(R) >> 0.
Given that context, here is how I parse the quote, line-by-line:
[A] questioner pointed out that there had been no sabotage or any other type of espionage by the Japanese-Americans up to that time.
I take the questioner to be asserting that there has been no observed sabotage or any other type of espionage by Japanese-Americans up to that time. Let E be this proposition.
Warren responds:
I take the view that this lack [of subversive activity] is the most ominous sign in our whole situation.
I take Warren to be saying that the expected cost of not interning Japanese-Americans is significantly higher after we update on E than it was before we updated on E. Letting D be the “default” action in which we don’t intern Japanese-Americans, Warren is asserting that EU(D | E) << EU(D).
The above assertion is the conclusion of Warren’s reasoning. If we can show that this conclusion follows from correct Bayesian reasoning from a psychologically realistic prior, plus whatever evidence he explicitly adduces, then the quote cannot serve as an example of the fallacy that Eliezer describes in this post.
Now, we may think that that “psychologically realistic prior” is very probably based in turn on “fear plus poor thinking”. But Warren doesn’t explicitly show us where his prior came from, so the quote in and of itself is not an example of an explicit error in Bayesian reasoning. Whatever fallacious reasoning occurred, it happened “behind the scenes”, prior to the reasoning on display in the quote.
Continuing with my parsing, Warren goes on to say:
It [this lack of subversive activity] convinces me more than perhaps any other factor that the sabotage we are to get, the Fifth Column activities are to get, are timed just like Pearl Harbor was timed… I believe we are just being lulled into a false sense of security.
Let Q be the proposition that there is a Japanese fifth column in America, and it will perform a timed attack, but right now it is lulling us into a false sense of security.
I take Warren to be claiming that p(Q | E) >> p(Q), and that p(Q | E) is sufficiently large to justify saying “I believe Q”.
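To make the claimed update concrete, here is a minimal numeric sketch. All numbers and hypothesis labels are my own invention, not Warren’s; the one structural assumption doing the work is that a disciplined fifth column predicts the absence of visible sabotage about as well as no fifth column does, while a sloppy one does not:

```python
# E = "no sabotage by Japanese-Americans observed so far".
# Three mutually exclusive, exhaustive hypotheses (invented for illustration):
#   Q: disciplined fifth column, lying low until a timed strike
#   S: sloppy fifth column, likely to have been caught by now
#   N: no fifth column at all
prior = {"Q": 0.5, "S": 0.4, "N": 0.1}          # a Warren-like prior: p(R) = p(Q) + p(S) = 0.9
likelihood = {"Q": 0.95, "S": 0.10, "N": 0.95}  # p(E | H)

p_E = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / p_E for h in prior}

print(f"p(Q)     = {prior['Q']:.2f}")      # 0.50
print(f"p(Q | E) = {posterior['Q']:.2f}")  # ~0.78: E shifts mass from S to Q (and to N)
```

Note that E also raises p(N): the update favors Q over S, not Q over N. Whether p(Q | E) ends up large enough to justify “I believe Q” depends on the prior already putting most of its mass on R, which is exactly the context assumed above.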
It remains only to give a psychologically realistic prior distribution p such that the claims above follow — that is, we need that
p(R) is sufficiently large to justify saying “R has high probability”,
p(E) = 1 - epsilon,
EU(D | E) << EU(D),
p(Q | E) >> p(Q),
p(Q | E) is sufficiently large to justify saying “I believe Q”.
This will suffice to invalidate the Warren quote as an example for this post.
It is a mathematical fact that such priors exist in an abstract sense. Do you think it unlikely that such a prior is psychologically realistic for someone in Warren’s position? I think that selection effects and standard human biases make it very plausible that someone in his position would have such a prior.
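For concreteness, here is one such prior written out numerically. Every number, including the utilities, is invented purely to exhibit existence; D is modeled as “no internment,” with utilities representing losses from a surprise attack under each hypothesis. The sketch checks the conditions that carry the argument:

```python
# One concrete prior/utility assignment satisfying the listed conditions.
# Hypotheses (invented for illustration):
#   Q: disciplined fifth column,  S: sloppy fifth column,  N: no fifth column
# R = "a fifth column exists" = Q or S.  E = "no sabotage observed so far".
prior = {"Q": 0.5, "S": 0.4, "N": 0.1}
likelihood = {"Q": 0.95, "S": 0.10, "N": 0.95}   # p(E | H)
utility_D = {"Q": -100.0, "S": -20.0, "N": 0.0}  # payoff of not interning under each H

p_E = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / p_E for h in prior}

p_R = prior["Q"] + prior["S"]                                   # 0.9: "R has high probability"
eu_D = sum(prior[h] * utility_D[h] for h in prior)              # EU(D)
eu_D_given_E = sum(posterior[h] * utility_D[h] for h in prior)  # EU(D | E)

print(f"p(R)   = {p_R:.2f}")
print(f"p(Q)   = {prior['Q']:.2f},   p(Q | E)  = {posterior['Q']:.2f}")
print(f"EU(D)  = {eu_D:.1f},  EU(D | E) = {eu_D_given_E:.1f}")  # EU(D | E) < EU(D)
```

One caveat: under this toy prior p(E) = 0.61 rather than 1 - epsilon; pushing p(E) toward 1 requires likelihoods near 1 under every hypothesis, which correspondingly weakens the update, since p(Q | E) ≤ p(Q)/p(E). So the size of epsilon trades off against how large the jump from p(Q) to p(Q | E) can be.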
If you’re still skeptical, we can discuss which priors are psychologically realistic for someone in Warren’s position.