Basically the problem is that a Bayesian should not be able to change its probabilities without new evidence, and if you assign a probability other than 1 to a mathematical truth, you will run into problems when you deduce that it follows of necessity from other things that have a probability of 1.
Why can’t the deduction be the evidence? If I start with a 50-50 prior that 4 is prime, I can then use the subsequent observation that I’ve found a factor to update downwards. This feels like it relies on the reasoner’s embedding, though, so maybe it’s cheating, but it’s not clear to me why it doesn’t count.
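The update described above can be sketched as a toy calculation (an illustration of the idea, not anyone's actual proposal): the "observation" is the output of an exhaustive factor search, and the reasoner conditions on it in the ordinary Bayesian way.

```python
# Toy sketch: treating a deduction (finding a factor) as evidence.
# Assumed model: a reasoner with a 50-50 prior on "4 is prime" who
# runs a factor search and conditions on its result.

def has_nontrivial_factor(n):
    """Exhaustively check for a factor strictly between 1 and n."""
    return any(n % k == 0 for k in range(2, n))

prior_prime = 0.5  # ignorance prior, before doing any arithmetic

# The search is exhaustive and deterministic, so the likelihoods are
# degenerate: P(factor found | prime) = 0, P(factor found | composite) = 1.
if has_nontrivial_factor(4):
    lik_given_prime, lik_given_composite = 0.0, 1.0
else:
    lik_given_prime, lik_given_composite = 1.0, 0.0

numerator = lik_given_prime * prior_prime
posterior_prime = numerator / (
    numerator + lik_given_composite * (1 - prior_prime)
)

print(posterior_prime)  # 0.0: the deduction drives the probability to 0
```

The degenerate likelihoods are exactly where the "cheating" worry bites: once the search is deterministic, the update only works because the reasoner didn't already know its own computation's output.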
How do you express Fermat’s last theorem, for instance, in the language I gave, or as a boolean combination of programs? Boolean algebra is not strong enough to derive, or even express, all of math.
edit: Let’s start simple. How do you express 1 + 1 = 2 in the language I gave, or as a boolean combination of programs?
The probability that there are two elephants, given one on the left and one on the right.
In any case, if your language can’t express Fermat’s last theorem then of course you don’t assign a probability of 1 to it, not because you assign it a different probability, but because you don’t assign it a probability at all.
I agree. I am saying that we need not assign it a probability at all. Your solution assumes that there is a way to express “two” in the language. Also, the proposition you made is more like “one elephant and another elephant makes two elephants” not “1 + 1 = 2”.
I think we’d be better off trying to find a way to express 1 + 1 = 2 as a boolean function on programs.
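One obvious first attempt (a sketch only, which sidesteps rather than settles the expressivity worry raised above) is to define "two" inside the language instead of assuming it, e.g. by a unary successor encoding, and then let the boolean be a program's output:

```python
# Toy sketch: "1 + 1 = 2" as a boolean-valued program, with "two"
# *defined* via iterated successors rather than assumed as a primitive.
# The encoding of naturals as nested tuples is an arbitrary choice.

ZERO = ()

def succ(n):
    """Successor: wrap the numeral in one more layer."""
    return (n,)

def add(m, n):
    """Addition by recursion on the first argument: 0+n=n, S(m)+n=S(m+n)."""
    return n if m == ZERO else succ(add(m[0], n))

ONE = succ(ZERO)
TWO = succ(succ(ZERO))

print(add(ONE, ONE) == TWO)  # True
```

Whether this counts as expressing the arithmetic proposition, as opposed to a particular program that happens to verify one instance of it, is exactly the point in dispute; quantified statements like Fermat’s last theorem don’t reduce to any finite boolean of program outputs this way.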
This goes into the “shit LW people say” collection :-)
Upvoted for cracking me up.