Eliezer has expressed the idea that using a Solomonoff-type prior over all programs doesn’t mean you believe the universe to be computable—it just means you’re trying to outperform all other computable agents.
Outperform at generating “predictions”—but why is that interesting? Especially if the universe is not computable, in which case the “predictions” don’t in fact have anything to do with the universe. (Which again assumes that “universe” is an interesting notion.)
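The “outperform all other computable agents” claim can be made concrete with a toy sketch: a Bayesian mixture over a small, hand-picked class of deterministic “programs”, each given prior weight 2^(-description length) in the Solomonoff style. The program class, names, and lengths below are illustrative assumptions, not a real universal prior (which would range over all programs and is incomputable).

```python
# Toy Solomonoff-style mixture predictor for binary sequences.
# Each "program" here is a hypothetical deterministic predictor mapping
# a history of bits to a predicted next bit; its integer "length" stands
# in for description length, giving the prior weight 2^(-length).

programs = {
    "always0":     (1, lambda hist: 0),                           # predict 0
    "always1":     (1, lambda hist: 1),                           # predict 1
    "repeat_last": (2, lambda hist: hist[-1] if hist else 0),     # copy last bit
    "alternate":   (3, lambda hist: 1 - hist[-1] if hist else 0), # flip last bit
}

def mixture_predict(history, weights):
    """Probability the weighted mixture assigns to the next bit being 1."""
    total = sum(weights.values())
    p1 = sum(w for name, w in weights.items()
             if programs[name][1](history) == 1)
    return p1 / total

def update(history, observed_bit, weights):
    """Bayes update: deterministic programs that mispredict get weight 0."""
    survivors = {name: w for name, w in weights.items()
                 if programs[name][1](history) == observed_bit}
    return survivors or weights  # keep old weights if everything is ruled out

# Simplicity-biased prior: weight 2^(-length) per program.
weights = {name: 2.0 ** -length for name, (length, _) in programs.items()}

history = []
for bit in [1, 1, 1, 1]:  # an example computable (constant) sequence
    weights = update(history, bit, weights)
    history.append(bit)

# Having seen only 1s, the mixture concentrates on "always1".
print(mixture_predict(history, weights))
```

On a sequence a short program generates, the mixture's total log-loss exceeds that program's by at most its description length, which is the precise sense of “outperform”. The question in the comment stands: if the sequence is generated by something outside the program class (or outside the computable entirely), this guarantee says nothing.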