You could use the Solomonoff prior (on a discretized version of this), but that way lies madness. It’s uncomputable, and many of the functions that fit the data may contain agents that try to get you to do their bidding, among other problems.
This seems false if you’re interacting with a computable universe, and don’t need to model yourself or copies of yourself. Computability of the prior also seems irrelevant if I have infinite compute. So for this prediction task, I don’t see the problem with just using the first thing you mentioned.
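For concreteness, here is a toy sketch of what a discretized, computable stand-in for such a prior could look like. The hypothesis class (periodic bit patterns), the 2^-length prior, and the exact-match likelihood are all my own illustrative choices, not anything specified in the thread; the real Solomonoff prior mixes over all programs and is uncomputable.

```python
from fractions import Fraction

# Toy, computable stand-in for a discretized Solomonoff-style mixture.
# Hypotheses: all periodic bit patterns with period <= max_period.
# Prior weight: 2^-len(pattern), so shorter (simpler) patterns dominate.
def hypotheses(max_period=4):
    for p in range(1, max_period + 1):
        for bits in range(2 ** p):
            yield tuple((bits >> i) & 1 for i in range(p))

def predict_next(observed, max_period=4):
    """Posterior-weighted probability that the next bit is 1."""
    num = Fraction(0)
    den = Fraction(0)
    for pattern in hypotheses(max_period):
        prior = Fraction(1, 2 ** len(pattern))
        # Likelihood is 1 if the pattern reproduces the data exactly, else 0.
        if all(observed[i] == pattern[i % len(pattern)]
               for i in range(len(observed))):
            den += prior
            if pattern[len(observed) % len(pattern)] == 1:
                num += prior
    # With no surviving hypothesis, fall back to an uninformative 1/2.
    return num / den if den else Fraction(1, 2)

# Both surviving patterns, (0,1) and (0,1,0,1), predict the next bit is 0.
print(predict_next([0, 1, 0, 1, 0, 1]))  # -> 0
```

The point of the sketch is only that once the hypothesis class is a fixed enumerable set, the Bayesian mixture is perfectly computable; the trouble in the original objection comes from mixing over *all* programs.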
This seems false if you’re interacting with a computable universe, and don’t need to model yourself or copies of yourself
Reasonable people disagree. Why should I care about the “limit of large data” instead of finite-data performance?
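To make the finite-data vs. large-data-limit distinction concrete, here is a small sketch of my own (not from the thread, and using conjugate Beta priors rather than anything Solomonoff-like): two agents with different priors over a coin's bias can disagree sharply after a few flips, yet their predictions converge as data accumulates.

```python
from fractions import Fraction

def predictive_prob_heads(a, b, heads, tails):
    # Beta(a, b) prior + Bernoulli likelihood gives posterior predictive
    # probability of heads equal to (a + heads) / (a + b + heads + tails).
    return Fraction(a + heads, a + b + heads + tails)

# Finite data: 3 heads out of 4 flips. Predictions differ a lot.
p1 = predictive_prob_heads(1, 1, 3, 1)    # uniform prior  -> 4/6
p2 = predictive_prob_heads(1, 100, 3, 1)  # strong tails prior -> 4/105
print(float(p1), float(p2))

# Much more data: 6000 heads out of 10000 flips. Both are near 0.6.
q1 = predictive_prob_heads(1, 1, 6000, 4000)
q2 = predictive_prob_heads(1, 100, 6000, 4000)
print(float(q1), float(q2))
```

So whether the large-data limit is the right yardstick depends on whether you expect to be in the data-rich regime; on finite data the choice of prior still does real work.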