Maybe you’ve heard this before, but the usual story is that the goal is to clarify conceptual questions that exist in both the abstract and the more practical settings. We are moving towards considering such things, though: the point of the post I linked was to reexamine old philosophical questions using logical inductors, which are computable.
Further, my intuition from studying logical induction is that practical systems will be “close enough” to satisfying the logical induction criterion that many things will carry over (much of this is just intuition one could also get from online learning theory). E.g., in the logical induction decision theory post, I expect the individual points made using logical inductors to mostly or entirely apply to practical systems, and you can use the fact that logical inductors are well-defined to test further ideas building on them.
When computations have costs, I think the nature of the problems changes drastically. I’ve argued here that we need to move up to meta-decision theories because of this.
The idea of Solomonoff induction is neither needed for building neural networks nor useful for reasoning about them. So my pragmatic heart is cold towards a theory of logical induction as well.