Whoops. Well, I may have figured out what you mean. But just in case, would you be willing to try and explain what sort of extra reasoning the robot should be doing in certain cases as if I had no idea what you meant?
What I’m saying is that the robot will actually do well in the given example, because when you use Bayesian inference on primes it works for some reason.
Other questions in math have weird answers; for example, for every natural number n, the number of smooth structures on R^n is exactly 1… except for n=4, in which case there are uncountably many.
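For reference, a compact way to write the fact being invoked (with structures counted up to diffeomorphism, as is standard):

```latex
\#\left\{\text{smooth structures on } \mathbb{R}^n\text{, up to diffeomorphism}\right\} =
\begin{cases}
  1 & \text{if } n \neq 4,\\
  \text{uncountably many} & \text{if } n = 4.
\end{cases}
```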
Probability distributions can continuously approximate this, but only with a lot of difficulty and inaccuracy. It’s an answer that isn’t going to fall nicely out of doing Bayesian thinking. Once an agent has seen one dimension with infinitely many smooth structures, they’ll be hard pressed to accept that no other dimension has more than 1… or, after seeing infinitely many dimensions with exactly 1, how will they accept one dimension with infinitely many? Basically, your priors have to be formalized in a weird way. This may be a problem that has already been addressed.
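A minimal sketch of that worry, assuming a deliberately naive agent: treat “does dimension n have more than one smooth structure?” as an exchangeable yes/no sequence and update by Laplace’s rule of succession. The function and the numbers below are purely illustrative, not anything from the robot as described.

```python
# Toy illustration (not the robot from the post): a naive Bayesian agent
# using Laplace's rule of succession to guess whether a dimension is
# "exceptional" (has more than one smooth structure), after observing
# the answer for a bunch of other dimensions.

def laplace_prob_exceptional(num_seen: int, num_exceptional: int) -> float:
    """P(next dimension is exceptional) under Laplace's rule of succession."""
    return (num_exceptional + 1) / (num_seen + 2)

# Suppose the agent has settled many dimensions other than 4, all with
# exactly one smooth structure, and now has to guess about n = 4.
for k in (3, 10, 49):
    p = laplace_prob_exceptional(num_seen=k, num_exceptional=0)
    print(f"after {k} unexceptional dimensions: P(n=4 exceptional) = {p:.3f}")

# The probability trends toward 0 as k grows -- the agent becomes ever
# more confident that n = 4 is unexceptional, which is exactly wrong.
```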
So my point is that just “throwing Bayes at it” is a totally reasonable way of making probabilistic estimates for theorems about countable and compact sets. It’s just that this method of reasoning will vary wildly in how useful it is, because some mathematical results look like probabilities and some don’t.
I don’t have ideas for extra reasoning processes; I just want to urge caution that the problem isn’t solved just because it can handle some situations.
Hm. I think you think the robot works differently than it actually does.
My guess for what you were going to say was that you wanted the robot to work in this different way, but nope.
A robot that predicts the next number in a series is something like a minimum message-length predictor. The robot outlined here actually can’t do that.
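For concreteness, here is a toy of the kind of predictor being contrasted, a minimum-message-length-flavoured next-number guesser; the rule families and the complexity penalty are made up for illustration.

```python
# Toy "minimum message-length flavoured" next-number predictor: score each
# candidate rule by (description length) + (heavy penalty per data point it
# fails to explain), then predict with the cheapest rule.

def candidate_rules():
    # (name, rough complexity in "bits", function from index -> value)
    for a in range(-5, 6):
        for b in range(-5, 6):
            yield (f"linear {a}*i+{b}", 8, lambda i, a=a, b=b: a * i + b)
    for r in range(2, 5):
        yield (f"geometric {r}^i", 6, lambda i, r=r: r ** i)

def predict_next(seq):
    best = None
    for name, bits, f in candidate_rules():
        misses = sum(f(i) != x for i, x in enumerate(seq))
        score = bits + 100 * misses          # heavy penalty for misfit
        if best is None or score < best[0]:
            best = (score, name, f(len(seq)))
    return best

print(predict_next([1, 3, 5, 7]))   # picks the linear rule 2*i+1 -> predicts 9
print(predict_next([1, 2, 4, 8]))   # picks the geometric rule 2^i -> predicts 16
```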
Instead, the robot attacks each problem based on theorems that require or rule out different combinations of relevant possibilities. So, for example, in the question “how many smooth structures are there in n dimensions” (thank you for making that example clear btw), the robot would for each separate case try to prove what was going on, and if it failed, it would likely do something dumb (like run out of time, or pick the first thing it hadn’t proven a theorem about) because it doesn’t handle infinity-option problems very well. What it wouldn’t do is try and predict the answer based on the answer for smaller dimensions, unless it could prove theorems connecting the two.
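A rough sketch, on my reading, of how the robot as described goes at such a question: try to prove or refute each candidate within the time budget, and otherwise fall back on something dumb. Here `try_to_prove` and the budget are hypothetical stand-ins, not part of the original design.

```python
# Sketch of the case-by-case, prove-or-fall-back behaviour described above.
# `try_to_prove(candidate, deadline)` is a hypothetical prover callback that
# returns "true", "false", or None (no theorem found in time).

import time

def answer_question(candidates, try_to_prove, budget_seconds=10.0):
    """Return a candidate answer, preferring ones settled by proof."""
    deadline = time.monotonic() + budget_seconds
    ruled_out = set()
    for c in candidates:
        if time.monotonic() >= deadline:
            break                              # ran out of time
        verdict = try_to_prove(c, deadline)
        if verdict == "true":
            return c                           # settled by a theorem
        if verdict == "false":
            ruled_out.add(c)
    # The dumb fallback: the first candidate it never proved anything about.
    # (With infinitely many candidates this never even terminates, which is
    # the "doesn't handle infinity-option problems very well" part.)
    for c in candidates:
        if c not in ruled_out:
            return c
```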
Okay, but say the robot in ten seconds manages to prove the theorem “For all dimensions n not equal to 4, there is exactly one smooth structure on R^n,” but is unable to produce a result regarding n=4. Or, more realistically, say it is able to construct one smooth structure on R^n for any n, and can prove that no others exist unless n=4. How does it make guesses in the case n=4?
If we want to live in the least convenient possible world, assume that in the first second it constructs the smooth structures; it then takes three seconds to prove that there are no more for n>4, three seconds to prove no more for n=1,2, and three more seconds to prove there are no more for n=3, at which point it runs out of time. These results are obtained incidentally from inequalities that arise while pursuing a proof for n=4, which is the central value of some equation at the core of the proofs (so the proofs really say “if another smooth structure exists, it exists for n<5, 2<n<5, 3<n<5”).
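Spelling out what those incidental results leave the robot with, taking the ranges from the parenthetical (everything else here is a hypothetical encoding):

```python
# Each partial proof only says "if another smooth structure exists, it
# exists for n in this range"; intersecting the ranges is what leaves
# n = 4 as the one dangling case.

constraints = [
    lambda n: n < 5,          # from the proof that there are no more for n > 4
    lambda n: 2 < n < 5,      # from the n = 1, 2 proofs
    lambda n: 3 < n < 5,      # from the n = 3 proof
]

still_possible = [n for n in range(1, 20)
                  if all(c(n) for c in constraints)]
print(still_possible)   # [4] -- the one case the robot never settled
```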
If it really can’t prove any theorems that directly include the translation of “the number of smooth structures for n=4 is,” it simply won’t ever update that.
Well, it can prove “the number of smooth structures for n=4 is at least 1.”