I’m also sceptical of optimality results. When you’re doing subjective probability, any method you come up with will be proven optimal relative to its own prior; the real differences between subjective methods are in their ontologies, and the optimality results don’t protect you against mistakes there. Also, when you’re doing subjectivism and the methods required to reach some optimality condition turn out not to be subjectively optimal, you say “don’t be a stupid frequentist” and do the subjectively optimal thing instead. So your bottom line is already written: if the optimality condition does come out in your favour, you can’t become more confident because of it, and that holds even under the radical version of expected evidence conservation. I also suspect that as subjectivism gets more “radical”, there will be fewer optimality results left besides the one relative to the prior.
This sounds like doing optimality results poorly. Unfortunately, there is a lot of that (e.g. how the different optimality notions for CDT and EDT don’t help decide between them).
In particular, the “don’t be a stupid frequentist” move has blinded Bayesians (although frequentists have also been blinded in a different way).
Solomonoff induction has a relatively good optimality notion (that it doesn’t do too much worse than any computable prediction).
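To make that concrete, here is the standard dominance bound I have in mind (notation mine; the exact constants depend on the formulation). If $\mu$ is any computable predictor (semimeasure) and $M$ is the Solomonoff mixture, then $M(x_{1:n}) \ge 2^{-K(\mu)}\,\mu(x_{1:n})$ for every sequence, so the cumulative log-loss obeys

$$-\log_2 M(x_{1:n}) \;\le\; -\log_2 \mu(x_{1:n}) + K(\mu).$$

In words: Solomonoff induction’s total log-loss is worse than that of any computable prediction method by at most a constant (the description length of that method), no matter which sequence actually shows up. That’s a guarantee about predictive performance, not just about coherence with your own prior.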
AIXI has a relatively poor one (you only guarantee that you take the subjectively best action according to Solomonoff induction; but this is hardly any guarantee at all in terms of reward gained, which is supposed to be the objective). (There are variants of AIXI which have other optimality guarantees, but none very compelling afaik.)
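To spell out why that guarantee is so weak: AIXI’s defining property is just that it picks the action maximizing expected return under the universal mixture $\xi$, schematically (eliding horizon and conditioning details)

$$a_t \;=\; \arg\max_{a_t}\sum_{o_t r_t}\cdots\max_{a_m}\sum_{o_m r_m}\,(r_t+\cdots+r_m)\;\xi(o_{1:m} r_{1:m}\mid a_{1:m}).$$

The “optimality” statement is essentially that this argmax is taken correctly with respect to $\xi$; nothing in it bounds how much reward you actually get in the true environment relative to some other policy, and my understanding is that the stronger-sounding variants (e.g. Pareto optimality, self-optimizing results) are either similarly subjective or depend heavily on the choice of universal prior.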
An example of a less trivial optimality notion is the infrabayes idea, where if the world fits within the constraints of one of your partial hypotheses, then you will eventually learn to do at least as well (reward-wise) as that hypothesis implies you can do.
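Schematically (my paraphrase, not the exact statement from the infra-Bayes write-ups): if the true environment $e^{*}$ satisfies the constraints of some partial hypothesis $\Theta$ in the prior, then in the appropriate asymptotic regime (e.g. as the discount factor $\gamma \to 1$) something like

$$\liminf_{\gamma \to 1}\;\Big(\mathbb{E}_{e^{*}}\big[U_\gamma^{\text{agent}}\big] \;-\; \max_{\pi}\,\min_{e \in \Theta}\, \mathbb{E}_{e,\pi}\big[U_\gamma\big]\Big)\;\ge\; 0$$

holds, i.e. you eventually do at least as well as the best policy can guarantee against the worst-case environment consistent with $\Theta$. Unlike the AIXI case, this is a statement about reward actually obtained, not just about acting coherently with respect to your own prior.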