It does not follow that, if I have X% confidence in a belief, I can make K statements in which I repose equal confidence and be wrong only ((100-X)/100)*K times.
You’re correct that a statement at a confidence level of p does not imply the existence of other statements at the level of p. But given that the statement is illustrative, that seems fine.
Do note that there is the question of where the confidence level came from in the first place. If I don’t have a set of 10k statements that look the same to me, of which about one is wrong, from where comes my confidence level of .9999? How did I distinguish it from a confidence level of .999?
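To make that question concrete, here is a minimal sketch (my own illustration, not anything from the thread; the function names expected_errors and simulate_errors are made up) of what would distinguish a .9999 level from a .999 level, assuming the statements are independent and comparable:

```python
import random

def expected_errors(confidence, n_statements):
    # Expected number of false statements among n_statements,
    # each held at the given confidence level.
    return (1 - confidence) * n_statements

def simulate_errors(confidence, n_statements, seed=0):
    # Simulate how many statements turn out false if each one is
    # independently true with probability equal to the confidence level.
    rng = random.Random(seed)
    return sum(rng.random() > confidence for _ in range(n_statements))

# With 10,000 comparable statements the two levels come apart:
# roughly 1 error expected at .9999 versus roughly 10 at .999.
print(expected_errors(0.9999, 10_000))   # ~1
print(expected_errors(0.999, 10_000))    # ~10
print(simulate_errors(0.9999, 10_000))   # typically 0-3
print(simulate_errors(0.999, 10_000))    # typically around 10
```

The point is just that the extra digit of confidence only shows up against a pool of comparable statements that large.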
Not 1 - epsilon (where epsilon is an arbitrarily small number), but a probability of 1.
Do you count a Boltzmann brain as ‘existing’ in the meaningful sense?
Note that a utility value of infinity implies a probability of 1, because of the probabilistic interpretation of utilities. Rather than assigning probability 0 to the statement, you can simply return a type error when they say “infinite utility”, just as you would return a type error if they said “probability 1”, and the rejection would work fine.
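One way to picture the type-error move (again my own sketch, not anything from the thread; check_probability and check_utility are invented names): treat “probability 1” and “infinite utility” as malformed inputs rather than as claims to be assigned a probability.

```python
import math

def check_probability(p):
    # Reject probabilities of exactly 0 or 1: a reasoner that never assigns
    # them treats such inputs as malformed rather than as extreme claims.
    if not (0.0 < p < 1.0):
        raise TypeError(f"probability must lie strictly between 0 and 1, got {p}")
    return p

def check_utility(u):
    # Reject infinite (or undefined) utilities the same way, instead of
    # arguing about what probability to assign to statements built on them.
    if not math.isfinite(u):
        raise TypeError(f"utility must be finite, got {u}")
    return u

check_probability(0.999)           # accepted
check_utility(3.5)                 # accepted
# check_probability(1.0)           # raises TypeError
# check_utility(float("inf"))      # raises TypeError
```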
Do note that there is the question of where the confidence level came from in the first place. If I don’t have a set of 10k statements that look the same to me, of which about one is wrong, from where comes my confidence level of .9999? How did I distinguish it from a confidence level of .999?
I may have updated from priors to arrive at that level. It may be my prior.
I can make infinitely many true statements (using the method I outlined above). If I want to reach a certain number of true statements, say Y, I can make Y true statements and make false statements for the rest to reach the desired confidence level.
If I want to reach a certain number of true statements, say Y, I can make Y true statements and make false statements for the rest to reach the desired confidence level.
If you’re doing the thing correctly, you view the statements as equally probable; otherwise it doesn’t make sense to group them together. It’s not “A=A, B=B, C=C, and D=E, all at confidence level 75%” because I can tell the difference, and would be better off saying “the first three at confidence level 1-epsilon, the last at confidence level epsilon.”
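As a concrete check on “would be better off saying”, here is a small sketch (mine, with an invented helper total_log_score) scoring the two reports with a log score; splitting the confidence levels beats lumping everything at 75%:

```python
import math

def total_log_score(probs_true, truths):
    # Sum over statements of log(probability assigned to what actually
    # happened): higher (closer to 0) is better.
    return sum(math.log(p if t else 1 - p) for p, t in zip(probs_true, truths))

# Four statements: A=A, B=B, C=C are true; D=E is false.
truths = [True, True, True, False]
eps = 1e-6

lumped = [0.75, 0.75, 0.75, 0.75]          # "all at confidence level 75%"
split  = [1 - eps, 1 - eps, 1 - eps, eps]  # 1-eps on the truths, eps on D=E

print(total_log_score(lumped, truths))  # about -2.25
print(total_log_score(split, truths))   # about -0.000004, much closer to 0
```

Any strictly proper scoring rule would, in expectation, prefer the honest split report, which is the sense in which the grouped 75% report throws away information the reporter actually has.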
I see.