The Monte Carlo algorithm is only one way of viewing the distribution. You can also imagine expanding the tree of possibilities in a more static way, computing the probabilities down each branch in a way that reflects the probabilistic generation.
A sufficiently clever implementation could save a lot of work.
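To make the sampling picture concrete, here's a toy sketch of the Monte Carlo view (entirely my own construction: propositional literals stand in for theories, and `consistent` is a deliberately shallow check that only catches direct contradictions):

```python
import random

# Toy Monte Carlo estimate of a "logical probability" over propositional
# literals.  A theory is built by adding random literals one at a time,
# keeping only additions that pass the consistency check.  P(A) is then
# the fraction of sampled theories containing A.

VARS = ["p", "q", "r"]

def consistent(theory, literal):
    """Shallow consistency check: reject a literal only if its direct
    negation is already in the theory."""
    var, sign = literal
    return (var, not sign) not in theory

def sample_theory(n_steps, rng):
    theory = set()
    for _ in range(n_steps):
        lit = (rng.choice(VARS), rng.choice([True, False]))
        if consistent(theory, lit):
            theory.add(lit)
    return theory

def logical_probability(literal, n_samples=10_000, seed=0):
    rng = random.Random(seed)
    hits = sum(literal in sample_theory(6, rng) for _ in range(n_samples))
    return hits / n_samples

# By symmetry between a literal and its negation, each estimate should
# land somewhere below 0.5 (a literal may also simply never be sampled).
print(logical_probability(("p", True)))
```

The static-tree view would instead enumerate the generation tree and sum exact branch probabilities; the sampler above just approximates that sum.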
Knowing arithmetic but not knowing the trillionth prime is fairly simple; this means that you are conditioning the probability distribution on the Peano axioms, but your consistency check isn’t going deep enough to get you the information about the trillionth prime. You’ll get a “reasonable” distribution over the digits of the trillionth prime, i.e., obeying any laws your consistency check does know, and giving simple theories high probability.
Presumably the consistency check will go deep enough to know that 2+2=4. :)
Oh, so the point of gradual failure as you go to limited computing power is that the theories you postulate get more and more inconsistent? I like it :D But I still feel like the part of the procedure that searches for your statement A among the logical theories still doesn’t fail gracefully when resources are limited. If you don’t have much time, it’s just not going to find A by any neutral or theorem-proving sort of search. And if you try just tossing in A at the last minute and checking for consistency, a situation where your consistency-checker isn’t very strong will just lead to your logical probability depending on the pattern with which you generate and add in A (presumably this would be set up to return P=0.5).
Or to put it another way, even if the search proves that “the trillionth prime ends in 0” has probability 0, it might not be able to use that to show that “the trillionth prime ends in 3” has any probability other than 0.5, if it’s having to toss it in at the end and do a consistency check against already-added theorems. Which is to say, I think there are weaknesses to doing sampling rather than doing probabilistic logic.
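Here's a toy version of the failure mode I mean (again my own invention, nothing from the actual procedure): the unknown is a digit d in 0..9, the checker has managed to prove one theorem ("d ≠ 0"), and the statement A gets tossed in at the end against a weak consistency check:

```python
import random

# "Toss A in at the end" with a weak checker.  The checker can refute a
# statement only if it directly contradicts a known theorem; otherwise
# neither A nor not-A is refutable, so the generator's fallback assigns
# A probability 1/2.

def weak_check(statement_digit, known_theorems):
    # can only refute "d == k" when the theorem "d != k" is already known
    return ("d !=", statement_digit) not in known_theorems

def toss_in_estimate(statement_digit, n_samples=10_000, seed=0):
    rng = random.Random(seed)
    known = {("d !=", 0)}          # the one theorem the checker found
    hits = 0
    for _ in range(n_samples):
        if not weak_check(statement_digit, known):
            continue               # refuted: contributes probability 0
        # unrefuted either way, so fall back to a coin flip
        hits += rng.random() < 0.5
    return hits / n_samples

print(toss_in_estimate(0))   # refuted by the known theorem: 0.0
print(toss_in_estimate(3))   # stuck near 0.5 instead of moving toward 1/9
```

Ruling out one digit never propagates to the others; a probabilistic-logic approach could renormalize the remaining nine digits to ~1/9 each.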
For a shameless plug of what I think is a slightly clever implementation, I wrote a post :P (It’s only slightly clever because it doesn’t do pattern-recognition, it only does probabilistic logic—the question is why and how pattern recognition helps) I’m still thinking about whether this corresponds to your algorithm for some choice of the logical-theory-generator and certain resources.
Thanks for the pointer to the post. :) I stress again that the random sampling is only one way of computing the probability; it’s equally possible to use more explicit probabilistic reasoning.