I’m not sure we’re using ‘scrutiny’ in the same way. One potential usage is “if I can think of a counterargument, I can exclude that argument from my analysis,” which is one I don’t endorse, and which it sounds like you no longer endorse.
Yes. I wasn’t literally discarding arguments whenever I thought of counterarguments, but I strongly tended in that direction, and I don’t endorse this.
What I think scrutiny is useful for is determining the likelihood ratio of an argument. To use the first argument given in support of the quantitative major, you might estimate the likelihood ratio to be, say, 2:1 in support, and then, after correcting for the counterargument of native ability, revise that estimate down to 3:2 in support. (Previously, this would have looked like revising the 2:1 estimate down to 1:1.)
I think that these likelihood ratios are too hard to determine with such high precision.
And so in the Penrose example, his suggestion that quantum effects might have something to do with consciousness is, say, 10:1 evidence in favor, because of your esteem for Penrose’s ability to think. But when Tegmark comes along and runs the numbers, and finds that it doesn’t pan out, I would revise that down to the neighborhood of 101:100.
Metaphorically, I agree with this, my skepticism about determining precise numerical estimates notwithstanding.
The confidence level in the range of ~0.5% sounds about right, up to an order of magnitude in either direction. The issue was that I was implicitly discarding that probability entirely, as if it were sufficiently small that it should play no role whatsoever in my thinking.
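For concreteness, here is a minimal sketch of the odds arithmetic being discussed. The 1:100 prior is a hypothetical of mine, and the likelihood ratios are the illustrative figures from the exchange above, not measured quantities:

```python
def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each likelihood ratio (a naive Bayesian
    update that treats the pieces of evidence as independent)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_probability(odds):
    """Convert odds o:1 into the probability o / (1 + o)."""
    return odds / (1 + odds)

# Hypothetical prior odds of 1:100 that quantum effects matter for consciousness.
prior_odds = 1 / 100

# Penrose's endorsement taken as 10:1 evidence; after Tegmark runs the
# numbers, that factor is revised down to roughly 101:100.
p_before = odds_to_probability(update_odds(prior_odds, [10 / 1]))
p_after = odds_to_probability(update_odds(prior_odds, [101 / 100]))

print(f"before Tegmark: {p_before:.1%}")  # ~9.1%
print(f"after Tegmark:  {p_after:.1%}")   # ~1.0%
```

On these made-up numbers, the evidence never drops to zero; it just shrinks to a factor near 1, which is exactly the behavior being described (and it lands in the same ballpark as the ~0.5% figure above).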
Lots of smart people speculate things could be the case, and then the math doesn’t work out.
As far as I know, Penrose hasn’t fully retracted his position. If that’s so, it should be given some weight.
And so if you have a precise mathematical model of scrutiny, you can incorporate all of this evidence together without having to rely on rules of thumb like “give weight to arguments that don’t stand up to scrutiny,” which Eliezer is rightly complaining will often lead you astray.
I don’t think that it’s fruitful to numerically quantify things in this way, because I think that the initial estimates are poor, and that making up a number makes epistemology worse rather than better, because of anchoring biases. Certainly when I myself have tried to do this in the past, I’ve had this experience. But maybe I just haven’t seen it done right.
My impression from Eliezer’s comment is that he’s implicitly reasoning in the same way that I was (discarding arguments that have ~1% probability of being true, as if they were too unlikely to be worth giving any weight).
We’re using different standards for cleverness, but the reason I worded things that way is that everyone has access to the same logic.
I think that the difference is significant. There’s a dearth of public knowledge concerning the depth of the achievements of the best mathematicians and physicists (as well as a dearth of public knowledge as to who the best mathematicians and physicists are). I think that the benefits to people’s epistemology of appreciating this would be non-negligible.
But the degree to which his intuitions are evidence depends on his skill in that particular area, and if he’s able to articulate the argument, then you can evaluate the argument on its own, and then it doesn’t matter who made it.
Here again lies the key point of contention. The point is that there’s a small but non-negligible probability that Penrose isn’t able to articulate the argument despite attempting to do so, or that he communicates under bad implicit assumptions about the language that his readers think in, or that there’s some other possibility I haven’t thought of that’s consistent with his views being sound.
I’m reminded of the student who wrote to Feynman complaining that she got a test question wrong because she followed his book, which contained a mistake. Feynman responded with “yep, I goofed, and you goofed by trusting me. You should have believed your teacher’s argument, because it’s correct.”
I’m certainly not saying that one should believe Penrose’s views with 50+% probability (the level of confidence that the student in the story seems to have had). I’m saying that one should give the possibility enough credence that one’s worldview isn’t turned upside down if one learns that one of the hypotheticals I gave above prevails.
My claim is that “the chance that classical computers aren’t capable of intelligence is negligible” is an inferior epistemic position to “it seems extremely likely that classical computers are capable of intelligence, but Roger Penrose is one of the greatest scientists of the 20th century, has thought about these things, and disagrees, so one could imagine believing otherwise in the future.”