One trouble might be that this robot isn’t equipped to handle improper distributions. So if you hand it an infinitude of finite simple groups and tell it to choose one, it assigns everything a probability of zero and then falls back on whatever tie-breaking algorithm it uses for options with identical utility.
I don’t think this is too hard to fix. The robot could have a unit of improper prior (epsilon) that it remembers is larger than zero but smaller than any positive real.
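One way to make that concrete (just a sketch of the idea, not anything from the original discussion): represent each credence as a pair "real part + coefficient times epsilon" and compare pairs lexicographically, so epsilon really is above zero but below every positive real. The class name `EpsProb` and the field names are my own invented illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch: a credence of the form real + eps_coeff * epsilon,
# where epsilon is an infinitesimal unit of improper prior.
# order=True gives lexicographic comparison over the fields in order,
# so the real part dominates and the epsilon coefficient only breaks ties.
@dataclass(frozen=True, order=True)
class EpsProb:
    real: float        # standard part of the credence
    eps_coeff: int = 0 # coefficient on the infinitesimal unit epsilon

# Each of infinitely many equally-credible options gets 0 + 1*epsilon:
# strictly above "impossible" (exactly 0), still below any positive real.
one_option = EpsProb(0.0, 1)
impossible = EpsProb(0.0, 0)
tiny_real = EpsProb(1e-300, 0)

assert impossible < one_option < tiny_real
```

The point of the lexicographic ordering is that no number of epsilons ever adds up to a real amount of probability, which matches the "smaller than any positive real" requirement, though (as noted below) it still gives you no way to compute things like the probability that the correct group has even order.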
Of course, this doesn’t tell you what to do when asked to guess whether the order of the correct finite simple group is even, which might be a pretty big drawback.
For the robot as described, this will actually happen (sort of like Wei Dai’s comment; I’m learning a lot from discussing with you guys :D ). It only lowers something’s probability once it proves something about that thing specifically, so it knocks most of its infinitely many options down by some big exponential and then, er, runs out of time trying to pick the option with highest utility. Okay, so there might be a small flaw.