Oh, so the point of gradual failure as you go to limited computing power is that the theories you postulate get more and more inconsistent? I like it :D But I still feel like the part of the procedure that searches for your statement A among the logical theories doesn’t fail gracefully when resources are limited. If you don’t have much time, it’s just not going to find A by any neutral or theorem-proving sort of search. And if you instead toss A in at the last minute and check for consistency, then a weak consistency-checker will leave your logical probability depending on the pattern with which you generate and add in A (presumably such a setup would be arranged to return P=0.5).
Or to put it another way, even if the search proves that “the trillionth prime ends in 0” has probability 0, it might not be able to use that to show that “the trillionth prime ends in 3” has any probability other than 0.5, if it’s having to toss it in at the end and do a consistency check against already-added theorems. Which is to say, I think there are weaknesses to doing sampling rather than doing probabilistic logic.
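To make the worry concrete, here’s a minimal toy sketch in Python. It is entirely made up for illustration, not anyone’s actual procedure: the sampler, the weak consistency checker, and every name in it are assumptions. The sampler often manages to refute “ends in 0,” but the last-minute consistency check is too weak to use that to move “ends in 3” off of 0.5.

```python
import random

# Toy encoding (hypothetical): statements about the last digit of the
# trillionth prime are ("eq", d) for "ends in d" or ("neq", d) for
# "does not end in d".

def directly_contradicts(fact, statement):
    """Weak consistency check: only spots a contradiction when the sampled
    fact and the new statement clash about the very same digit."""
    (kind_f, d_f), (kind_s, d_s) = fact, statement
    if d_f != d_s:
        return False               # too weak to reason across digits
    return kind_f != kind_s        # e.g. ("neq", 3) vs ("eq", 3)

def entails(fact, statement):
    """Equally weak entailment: only recognizes the statement verbatim."""
    return fact == statement

def sample_theory(rng):
    """Stand-in for the neutral theorem-proving search.  Suppose it often
    refutes "ends in 0" but never settles any of the other digits."""
    theory = []
    if rng.random() < 0.8:
        theory.append(("neq", 0))
    return theory

def logical_probability(statement, n_samples=10_000, seed=0):
    """Toss the statement in at the end of each sampled theory: count 0 if
    the weak checker refutes it, 1 if it proves it, and fall back to 0.5
    otherwise (the fallback the comment above presumes)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        theory = sample_theory(rng)
        if any(directly_contradicts(f, statement) for f in theory):
            total += 0.0
        elif any(entails(f, statement) for f in theory):
            total += 1.0
        else:
            total += 0.5
    return total / n_samples

if __name__ == "__main__":
    print(logical_probability(("eq", 0)))  # pushed toward 0 by the refutations
    print(logical_probability(("eq", 3)))  # stuck at 0.5: the checker can't use
                                           # "not 0" to renormalize the others
```

The upshot of the toy version: driving P(“ends in 0”) toward 0 does nothing to push the remaining digits toward the 1/4 each that actual probabilistic logic would assign.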
As a shameless plug for what I think is a slightly clever implementation, I wrote a post :P (It’s only slightly clever because it doesn’t do pattern recognition, only probabilistic logic; the question is why and how pattern recognition helps.) I’m still thinking about whether this corresponds to your algorithm for some choice of the logical-theory generator and certain resources.
Thanks for the pointer to the post. :) I stress again that the random sampling is only one way of computing the probability; it’s equally possible to use more explicit probabilistic reasoning.