Yes, see response to Dagon. But 0.99999999 seems overconfident to me. You have to account not only for “I might be insane” (what are the base rates on that?), but also for simpler things like “I misread the question or had a brain fart.”
Like, there’s an old LW chat log where someone claims they can be 99.999% confident about whether a low-digit number is prime. Then someone challenges them to answer “prime or not?” for ~100 numbers, and like 25 questions in they get one wrong. 0.99999999 is Really God Damn Confident.
I was curious to re-read the chat log, and had to do some digging on archive.org to find it. The guy made 17 bets about numbers being prime, and lost the 17th.
Transcript here.
Sequence article that referenced it here.
Interesting followup by Chris Hallquist here:

If it’s not clear why this doesn’t follow, consider the anecdote Eliezer references in the quote above, which runs as follows: A gets B to agree that if 7 is not prime, B will give A $100. B then makes the same agreement for 11, 13, 17, 19, and 23. Then A asks about 27. B refuses. What about 29? Sure. 31? Yes. 33? No. 37? Yes. 39? No. 41? Yes. 43? Yes. 47? Yes. 49? No. 51? Yes. And suddenly B is $100 poorer.

Now, B claimed to be 100% sure about 7 being prime, which I don’t agree with. But that’s not what lost him his $100. What lost him his $100 is that, as the game went on, he got careless. If he’d taken the time to ask himself, “am I really as sure about 51 as I am about 7?” he’d probably have realized the answer was “no.” He probably didn’t check the primality of 51 as carefully as I checked the primality of 53 at the beginning of this post. (From the provided chat transcript, sleep deprivation may have also had something to do with it.)

If you tried to make 10,000 statements with 99.99% certainty, sooner or later you would get careless. Heck, before I started writing this post, I tried typing up a list of statements I was sure of, and it wasn’t long before I’d typed 1 + 0 = 10 (I’d meant to type 1 + 9 = 10. Oops.) But the fact that, as the exercise went on, you’d start including statements that weren’t really as certain as the first statement doesn’t mean you couldn’t be justified in being 99.99% certain of that first statement.
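For what it’s worth, the careful check B skipped is entirely mechanical. A minimal trial-division sketch (my own illustration, not from the transcript), run over the numbers from the anecdote:

```python
def is_prime(n: int) -> bool:
    """Trial division; plenty fast for numbers this small."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The numbers from the anecdote. 51 = 3 * 17 is the one B missed.
for n in [7, 11, 13, 17, 19, 23, 27, 29, 31, 33, 37, 39, 41, 43, 47, 49, 51]:
    print(n, is_prime(n))
```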
I do think this is an important counterpoint. And while I agree that a person who actually thought carefully about each number would have made it much farther than a 1-out-of-17 failure rate, I’d still bet against them successfully making 10,000 careful statements without ever screwing up in some dumb way.
Anecdata: In the mobile game Golf Rivals, it is trivial to sink a putt from any distance on the green with a little bit of care. I (and my opponents) miss about 1 in 1000 times.
+3 for the concrete example.
Those could go either way.
Not so. “X is guilty” is a very specific hypothesis and 0.99999999 is Very Confident, so general increases in uncertainty should make you think it’s less likely that “X is guilty” is true. For example, if I’m told I misread the question, I now have non-trivial probability mass on having been asked other questions, and I won’t be 0.99999999 confident on nearly any of those, so I should become less confident.

The result is that only a specific misreading could make you more confident, while most misreadings will make you less confident, so overall you should become less confident.
In the log-odds space, both directions look the same. You can wander up as easily as down.
I don’t know what probability space you have in mind for the set of all possible phenomena leading to an error, that would give a basis for saying that most errors will lie in one direction.
When I calculated the odds for the Euromillions lottery, my first attempt forgot to divide by the factor that accounts for the chosen numbers being unordered, giving a probability of winning that was too small by a factor of 240. The true odds are about 140 million to 1.
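To make the arithmetic concrete, here is a quick check (a sketch assuming the standard 5-numbers-from-50 plus 2-stars-from-12 Euromillions format):

```python
from math import comb, factorial

# Euromillions: choose 5 main numbers from 50 and 2 stars from 12,
# order irrelevant in both draws.
correct = comb(50, 5) * comb(12, 2)
print(correct)  # 139,838,160 -- the "about 140 million to 1"

# The mistaken calculation counted ordered draws, i.e. it never
# divided by the 5! * 2! possible orderings.
print(factorial(5) * factorial(2))  # 240, the reported error factor
```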
I have noted before that ordinary people, too ignorant to know that clever people think it impossible, manage to collect huge jackpots. It is literally news when they do not.
It’s not a random walk among probabilities; it’s a random walk among questions, which have associated probabilities. That results in a distinctly non-random walk downward in probability.
The underlying distribution might be described best as “nearly all questions cannot be decided with probabilities that are as certain as 0.999999”.
There is a difference between “error in calculation” and “error in interpreting the question”. The former affects the result in a way that is roughly as likely to push it up as down. But if you err in interpreting the question, you’re placing probability mass on other questions, which you are on average less than 0.999999 certain about. Roughly, I’m saying that you should expect regression-to-the-mean effects in proportion to the uncertainty. E.g., if I tell you I scored a 90% on a test for which the average was 70%, then you expect me to score a bit lower on a test of equal difficulty. But if I also tell you that I guessed on half the questions, then you should expect me to score a lot lower than you would if you’d assumed I guessed on none.
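A toy model of that last point (all parameters here are my own illustrative assumptions): suppose you answer the questions you know with certainty and flip a fair coin on the rest. The points from known questions repeat on a retest; any first-test surplus above the expected guessing score was luck and does not repeat.

```python
# Toy model: n questions, a reported first-test score, and some number
# of guessed true/false questions. Known answers are assumed always
# correct; guesses are fair coin flips.
n, observed = 100, 90

def expected_retest(n_guessed: int) -> float:
    n_known = n - n_guessed
    # Known questions contribute their points again; guessed questions
    # contribute n_guessed / 2 in expectation, regardless of earlier luck.
    return min(observed, n_known) + n_guessed / 2

print(expected_retest(0))   # 90.0 -- no luck involved, nothing to regress
print(expected_retest(50))  # 75.0 -- the 40/50 lucky guesses regress to 25/50
```

Under this (deliberately simplified) model, scoring 90 with no guessing predicts another 90, while scoring 90 after guessing on half the questions predicts only 75.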
I don’t know why the last comment is relevant. I agree that 1 in a million odds happen 1 in a million times. I also agree that people win the lottery. My interpretation is that it means “sometimes people say impossible when they really mean extremely unlikely”, which I agree is true.
The point was not that people win the lottery. It’s that when they do, they are able to update against the over 100 million-to-one odds that this has happened. “No, no,” say the clever people who think the human mind is incapable of such a shift in log-odds, “far more likely that you’ve made a mistake, or the lottery doesn’t even exist, or you’ve had a hallucination.” The clever people are wrong.
Anecdata: people who win large lotteries often express verbal disbelief, and ask others to confirm that they are not hallucinating. In fact, some even express disbelief while sitting in the mansion they bought with their winnings!
And yet, despite saying “Inconceivable!” they did collect their winnings and buy the mansion.
Right, but they don’t update to that from a single data point (looking at the winning numbers and their ticket once); they seek out additional data until they have enough subjective evidence to update to the very, very unlikely event (and they are able to do this because the event actually happened). Probably hundreds of people think they won any given lottery at first, but when they double-check, they discover that they did not.
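That accumulation is easy to sketch in odds form (the likelihood ratios below are invented for illustration; the point is only that a handful of roughly independent checks can overwhelm a 1-in-140-million prior):

```python
# Prior odds of holding the winning ticket, and made-up likelihood
# ratios for each confirmation step: how much more likely that
# evidence is if you really won than if you misread or are mistaken.
prior_odds = 1 / 140_000_000
checks = [
    ("re-read the ticket carefully", 1_000),
    ("family member confirms the numbers", 1_000),
    ("official site lists your numbers", 10_000),
    ("lottery office validates the ticket", 100_000),
]

odds = prior_odds
for name, likelihood_ratio in checks:
    odds *= likelihood_ratio
    print(f"{name}: odds {odds:.3g}, probability {odds / (1 + odds):.6f}")
```

Each multiplication is one double-check; by the end the posterior probability is within a hair of 1, which is exactly the “seek out additional data” process described above.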
Seems like what matters is “if you make 1,000,000 claims that you’re 0.999999 confident in, will you be right 999,999 times?” Yes, insanity and brain farts could go in any direction, but they go in sufficiently many directions (at least two) that I bet if you try to make a hundred 99.9999%-confidence claims, you’ll screw up at least once.
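That bet is just binomial arithmetic (independence between claims is the one simplifying assumption):

```python
# Probability of at least one error in n claims, each wrong with
# probability p.
def p_any_error(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(p_any_error(1e-6, 100))  # ~0.0001 if truly calibrated at 99.9999%
print(p_any_error(1e-3, 100))  # ~0.095 at a 1-in-1000 slip rate
```

If the real rate of careless slips is anywhere near the 1-in-1000 from the putting anecdote, at least one screwup in a hundred claims has roughly a 1-in-10 chance, and over 10,000 statements it becomes a near certainty.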