Eliezer has expressed the idea that using a Solomonoff-type prior over all programs doesn’t mean you believe the universe to be computable—it just means you’re trying to outperform all other (ETA: strike the word “other”) computable agents. This position took me a lot of time to parse, but now I consider it completely correct. Unfortunately, the reason it’s correct is not easy to express in words; it’s just some sort of free-floating math idea in my head.
Not sure how exactly this position meshes with UDT, though.
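For what it’s worth, here is one standard way to make the “outperform all computable agents” reading precise. This is only a sketch of the usual dominance property of the Solomonoff prior (notation: $U$ is a universal prefix machine, $\ell(p)$ is the length of program $p$), not necessarily the exact free-floating idea referred to above:

$$M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string starting with}\ x} 2^{-\ell(p)}$$

For any computable predictor $\nu$ (more generally, any lower-semicomputable semimeasure) there is a constant $c_\nu > 0$, roughly $2^{-K(\nu)}$ where $K(\nu)$ is the complexity of an index for $\nu$, such that

$$M(x) \;\ge\; c_\nu \, \nu(x) \quad \text{for every finite string } x,$$

which in log-loss terms says that on every sequence, computable or not,

$$-\log_2 M(x_{1:n}) \;\le\; -\log_2 \nu(x_{1:n}) + \log_2(1/c_\nu).$$

So the Solomonoff predictor loses at most an additive constant to any computable predictor, and nothing in that statement assumes the sequence itself was generated by a computable process.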
Outperform at generating “predictions”, but why is that interesting? Especially if the universe is not computable, so that “predictions” don’t in fact have anything to do with the universe? (Which again assumes that “universe” is interesting.)
Also, if the universe is not computable, there may be hyperturing agents running around. You might even want to become one.
Why do you say “all other computable agents”? Solomonoff induction is not computable.
Right, sorry. My brain must’ve had a hiccup. It’s scary how often this happens. Amended the comment.