extension of Solomonoff induction to anthropic reasoning and higher-order logic – why ideal rational agents still seem to need anthropic assumptions.
I would say it lacks a rationale. AFAIK, intelligent agents just maximise some measure of utility. Anthropic issues are dealt with automatically as part of this process.
Much the same is true of this one:
Theory of logical uncertainty in temporal bounded agents.
Again, this is a sub-problem of solving the maximisation problem.
Breaking a problem down into sub-problems is valuable—of course. On the other hand you don’t want to mistake one problem for three problems—or state a simple problem in a complicated way.
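To make the point concrete, here is a minimal toy sketch of what "just maximise some measure of utility" looks like. It is my own illustration in Python, not anything from the quoted problem statements: the hypothesis names, the utilities, and the 2^-complexity prior are all invented for the example. The agent weights hypotheses by a simplicity prior, conditions on its observations, and picks the action with the highest expected utility. A hypothesis in which the agent never exists simply fails to reproduce its observations and drops out during conditioning, so the "anthropic" update happens as part of the maximisation rather than as a separate problem; a bounded agent would merely approximate the same argmax under a time budget.

```python
# Toy sketch: expected-utility maximisation over hypotheses weighted by a
# simplicity prior. Everything here (names, utilities, complexities) is
# invented for illustration.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Hypothesis:
    """A candidate world-model: predicts observations and scores actions."""
    name: str
    complexity: int                      # proxy for description length
    predicts: Callable[[str], bool]      # does it reproduce the observation?
    utility: Callable[[str], float]      # utility of each action if it is true


def posterior(hypotheses: List[Hypothesis], observation: str) -> Dict[str, float]:
    """Simplicity prior 2^-complexity, conditioned on the observation."""
    weights = {h.name: 2.0 ** -h.complexity
               for h in hypotheses if h.predicts(observation)}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}


def best_action(hypotheses: List[Hypothesis], observation: str,
                actions: List[str]) -> str:
    """Plain expected-utility maximisation over the conditioned hypotheses."""
    post = posterior(hypotheses, observation)
    by_name = {h.name: h for h in hypotheses}
    return max(
        actions,
        key=lambda a: sum(p * by_name[name].utility(a) for name, p in post.items()),
    )


if __name__ == "__main__":
    hypotheses = [
        # A world in which the agent never comes to exist cannot reproduce the
        # observation "I exist", so conditioning discards it; that is the whole
        # of the "anthropic" step in this toy picture.
        Hypothesis("barren world", 2, lambda obs: False, lambda a: 0.0),
        Hypothesis("world A", 3, lambda obs: True,
                   lambda a: 1.0 if a == "left" else 0.0),
        Hypothesis("world B", 5, lambda obs: True,
                   lambda a: 1.0 if a == "right" else 0.0),
    ]
    print(best_action(hypotheses, "I exist and see X", ["left", "right"]))  # "left"
```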