I should perhaps include within the text a more direct link to Peter de Blanc’s anecdote here:
http://www.spaceandgames.com/?p=27
I won’t say “Thus I refute” but it is certainly a cautionary tale.
It seems to me to be mostly a cautionary tale about the dangers of taking a long series of bets when you’re tired.
Definitely agreed. It’s basically a variation on the old (very old) “Get a distracted or otherwise impaired person to agree to a bunch of obviously true statements, and then slip in a false one to trip them up” trick. I can’t see that it has any relevance to the philosophical issue at hand.
Yeah. When I try to do the “can I make a hundred statements yadda yadda” test I typically think in terms of one statement a day for a hundred days. Or more often, “if I make a statement in this class every day, how long do I expect it to take before I get one wrong?”
Not quite, as SquallMage had correctly answered that 27, 33, 39 and 49 were not prime.
I believe that was part of the mistake: answering whether or not the numbers were prime, when the original question, last repeated several minutes earlier, was whether or not to accept a deal.
The point is, it’s fundamentally the same trick, and is just that: a trick.
Except it’s not the same trick. What you describe relies on the mark getting into the rhythm of replying “yes” to every question; the actual example described has the mark checking each number, but making a mistake eventually, because the odds they will make a mistake are not zero.
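That compounding effect can be sketched numerically. The per-check error rate below is a hypothetical figure for illustration, not one from the thread:

```python
# Probability of at least one slip over a series of independent checks.
# The per-check error rate p is an assumed, illustrative number.
def p_at_least_one_error(p, n):
    """Chance of >= 1 mistake in n independent checks, each failing with probability p."""
    return 1 - (1 - p) ** n

# Even a 0.1% per-check error rate compounds quickly over a long series:
print(round(p_at_least_one_error(0.001, 100), 4))   # ~0.0952
print(round(p_at_least_one_error(0.001, 1000), 4))  # ~0.6323
```

So a mark who is individually very reliable per question still becomes quite likely to err somewhere, given enough questions.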
I think we need to be very careful what system we’re actually describing.
If someone asks me, “are you 99.99% sure 3 is prime? what about 5? what about 7? what about 9? what about 11?”, my model does not actually consider these to be separate-and-independent facts, each with its own assigned prior. My mind “chunks” them together into a single meta-question: “am I 99.99% sure that, if asked X questions of the nature ‘is {N} prime’, my answer will conform to their own?”
This question itself depends on many sub-systems, each with its own probability:
P(1). How likely is my prime number detection heuristic to return a false positive?
P(2). How prone is my ability to utilize my prime number detection heuristic to error?
P(3). How lossy is the channel by which I communicate the results of my utilization of my prime number detection heuristic?
P(4). How likely is it that the apparently-communicated question ‘is {N} prime?’ actually refers to a different thing than I mean when I utilize my prime number detection heuristic?
So the meta-question “am I 99.99% sure that, if asked X questions of the nature ‘is {N} prime’, my answer will conform to their own?” is bounded above by ([1 - P(1)] * [1 - P(2)] * [1 - P(3)] * [1 - P(4)])^X, which is strictly less than 0.9999^X.
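As a numeric sketch of this bound (the four P(i) values below are made-up illustrative numbers, not measurements):

```python
# Compound per-question success is the product of the four subsystem
# reliabilities; all P(i) values here are hypothetical.
P = {
    1: 1e-4,  # heuristic returns a wrong answer
    2: 1e-3,  # error in executing the heuristic (fatigue, slips)
    3: 1e-4,  # miscommunication of the result
    4: 1e-4,  # misunderstanding what question is being asked
}
per_question = 1.0
for p in P.values():
    per_question *= (1 - p)

X = 100
naive = (1 - P[1]) ** X     # bound considering only P(1)
actual = per_question ** X  # bound considering all four subsystems
print(per_question)         # strictly below 1 - P(1)
print(naive, actual)        # the full bound decays faster with X
```

With these numbers the naive bound over 100 questions stays around 0.99, while the compound bound is noticeably lower; the gap widens as X grows.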
Why I feel this is important:
When first asked a series of “Is {N} prime?” questions, my mind will immediately recognize the meta-question represented by P(1). It will NOT intuitively consider P(2), P(3) or P(4) relevant to the final bounds, so it will compute those bounds as ([1 - P(1)] * 1 * 1 * 1)^X = [1 - P(1)]^X ≈ 0.9999^X.
Then, later, when P(2) turns out to be non-zero due to mental fatigue, I will explain away my failure as “I was tired” without REALLY recognizing that the original failure was in not recognizing P(2) as a confounding input in the first place. (I.e.: in my personal case, especially if I was tired, I feel that I’d be likely to ACTUALLY use the heuristic “does my sense of memory recognition ping when I see these numbers in the context of ‘numbers with factorizations’” rather than the heuristic “perform Eratosthenes’ sieve rigorously and check all potential factors”, and not even realize that I was not performing the heuristic that I was originally claiming 99.99% confidence in.)
I think a lot of people are making this mistake, since they seem to phrase their objections as “how could I have a prior that math is false?”—when the actual composite prior is “could math be false, OR my ability to properly perform that math be compromised, OR my ability to properly communicate my answer be compromised, OR my understanding of what question is actually being asked be faulty?”, which is what your example actually illustrates.
EDIT: did I explain this poorly? Or is it just really stupid?
Sure, P(I’m mistaken about whether 53 is prime) is non-negligible (I’ve had far worse brain farts myself).
But P(I’m mistaken about whether 53 is prime|I’m not sleep-deprived, I haven’t answered a dozen similar questions in the last five minutes, and I’ve spent more than ten seconds thinking about this) is several orders of magnitude smaller.
And P(I’m mistaken about whether 53 is prime|[the above], and I’m looking at a printed list of prime numbers and at the output of factor 53) is almost at the blue tentacle level.