Game 3 raises a question: is there some way we can meaningfully define a sequence as non-hostile?
When people analyze algorithms, they generally don’t speak of “hostile” inputs that give you license to lie down and die. Instead they speak of average-case and worst-case performance. If you can’t show that your favorite agent (e.g. Solomonoff) wins all games, the next best thing is to show that it outperforms other agents on a weighted average across all games. But that would require us to invent and justify some prior over all possible input sequences, including uncomputable ones, which is difficult and might even be impossible, as I pointed out in the post.
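To make the difficulty concrete, here is a toy sketch of such a weighted-average comparison. Everything in it (the predictor names, the prior, the scoring rule) is my own illustration, not anything from the discussion above: it compares two simple next-bit predictors over all binary sequences up to a small length, under a prior that gives each length mass 2^-length, spread uniformly over the sequences of that length.

```python
from itertools import product

def score(predict, seq):
    """Number of correct next-bit predictions the agent makes on seq."""
    return sum(predict(seq[:i]) == seq[i] for i in range(len(seq)))

def always_zero(prefix):
    # Trivial agent: always predicts 0.
    return 0

def copy_last(prefix):
    # Slightly cleverer agent: repeats the most recent bit.
    return prefix[-1] if prefix else 0

def weighted_average(predict, n=8):
    """Average score over all binary sequences of length 1..n,
    where length L gets prior mass 2^-L split evenly among its
    2^L sequences (so each sequence gets weight 2^-2L)."""
    total = 0.0
    for length in range(1, n + 1):
        for seq in product((0, 1), repeat=length):
            total += 2.0 ** (-2 * length) * score(predict, seq)
    return total
```

The punchline is that under this prior the two agents tie exactly: a prior that is uniform within each length makes every next bit a fair coin, so no predictor can beat any other on average. All the work is done by the choice of prior, and extending that choice to all sequences, including uncomputable ones, in a justified way is precisely the hard part.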