One way to illuminate this post is by analogy to the old immovable object and unstoppable force puzzle. See: http://en.wikipedia.org/wiki/Irresistible_force_paradox
The solution to the puzzle is to point out that the assumptions contain a contradiction. People (well, children) sometimes get into shouting matches over it, with rival arguments each focusing on, or emphasizing, one aspect of the problem over the other.
If we read the post as trying to balance two absolutes, with words like “anosognosia”, “absolute denial macro”, “doublethink”, and “denial-of-denial” supporting one side, and words like “redundant”, “AI”, “well-calibrated”, “99.9% sure” supporting the other side, then any answer that favors one absolute over the other is clearly wrong.
However, because the author of the post presumably has a point, and is not merely creating nonsense puzzles to amuse us readers, the analogy leads us to focus on the parts of the post that do not fit.
As far as I can tell, the primary aspect that does not fit is the “99.9%”. If we assume that all the other factors are intended to be absolutes, then the post becomes a query for claims that you presently do not believe, but you would believe, given a particular degree of evidence. If we assume that you would revise your degree of belief upwards by a Bayes factor of 1000, the post becomes a simple question “What claims would you give odds of 1:1000 for?”
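To make that arithmetic concrete, here is a quick Python sketch (my own illustration, not from the post): a Bayes factor of 1000 applied to prior odds of 1:1000 lands you exactly at even odds, so any claim you currently rate even slightly above 1:1000 ends up more likely than not.

```python
# Sketch of the odds-form Bayes update implied by a factor-of-1000 revision.
# The 1:1000 prior is just the illustrative case from the comment above.

def update_odds(prior_odds, bayes_factor):
    """Posterior odds = prior odds in favor of the claim times the likelihood ratio."""
    return prior_odds * bayes_factor

prior_odds = 1 / 1000                      # a claim you'd give odds of 1:1000 for
posterior_odds = update_odds(prior_odds, 1000)
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_odds, posterior_prob)      # 1.0, 0.5 -> even odds after the update
```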
Of course, there are plenty of beliefs, such as "I will roll precisely the sequence 345 on the next three rolls of this 10-sided die," which do not fit the form required by the problem. Specifically, the statement needs to be generic enough that it could be targeted by species-wide brain features.
A possible strategy for testing these might be: Suppose you had a bundle of almost 700 equally plausible claims. Would you give even odds for something in the bundle being correct? If so, you’re at the one-in-one-thousand level. If not, you’re above or below it.
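For what it's worth, the "almost 700" figure checks out, if the claims in the bundle are treated as independent (an assumption I'm adding for the arithmetic):

```python
# How many independent 1-in-1000 claims does a bundle need before it is more
# likely than not to contain at least one true claim?
from math import ceil, log

p = 1 / 1000
n = ceil(log(0.5) / log(1 - p))   # smallest n with 1 - (1 - p)**n >= 0.5
print(n, 1 - (1 - p) ** n)        # 693, ~0.500
```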
You’re mistaking the probability of the hypothesis given the AI’s knowledge for the likelihood ratio that the data confers on the hypothesis given your own prior knowledge.
The AI is a truth-detector that is wrong 1 time in 1000. If the detector says “true”, I shift my certainty upwards by a factor of 1000. The AI’s own knowledge doesn’t enter into this picture.
So if someone rolls a 10^6-sided die and tells you they’re 99.9% sure the number was 749,763, you would only assign it a posterior probability of 10^-3?
I see. I used a wrong state space to model this. The answer above is right if I expect a statement of the form “I’m 99.9% sure that N was/wasn’t the number”, and have no knowledge about how N is related to the number on the die. Such statements would be correct 99.9% of the time, and I would only expect to hear positive statements 0.1% of the time, 99.9% of them incorrect.
The correct model is to expect a statement of the form “I’m 99.9% sure that N was the number”, with no option for negative, only with options for N. For such statements to be correct 99.9% of the time, N needs to be the right answer 99.9% of the time, as expected.
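To spell out the difference between the two state spaces, here is a small Python sketch (my own arithmetic, not from the thread) applying Bayes' rule exactly to the 10^6-sided-die example:

```python
# Wrong model: N is chosen independently of the roll, and the speaker asserts
# "N was" or "N wasn't the number", flipping the truth 1 time in 1000.
# Correct model: the speaker names the rolled number itself 999 times in 1000.

SIDES = 10 ** 6
ERR = 1 / 1000                  # speaker is wrong 1 time in 1000
p_match = 1 / SIDES             # chance an unrelated N happens to equal the roll

# Wrong model: probability of hearing a positive statement, and of it being right.
p_positive = p_match * (1 - ERR) + (1 - p_match) * ERR
p_right_given_positive = p_match * (1 - ERR) / p_positive
print(p_positive)               # ~0.001: positive statements are rare
print(p_right_given_positive)   # ~0.001: and almost always wrong, as described above

# Correct model: the named N simply is the roll with probability 1 - ERR.
print(1 - ERR)                  # 0.999: N is the right answer 99.9% of the time
```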