Very accurate and general Predictors may be based on Solomonoff's theory of universal induction. Very powerful Predictors are unsafe in a rather surprising way: when given sufficient data about the real world, they exhibit goal-seeking behavior, that is, they compute a distribution over future data in a way that brings about certain real-world states. This is surprising because a Predictor is, in theory, just a very large and expensive application of Bayes' law; it does not even perform a search over its possible outputs.
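For reference, here is a minimal formal sketch of the kind of Predictor meant above; the notation (M for the Solomonoff prior, U for a universal monotone machine, ℓ(p) for program length) is my own gloss, not from the original. The prior weights every program p whose output begins with the observed string x, and prediction is ordinary conditioning:

\[
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},
\qquad
M(b \mid x) \;=\; \frac{M(xb)}{M(x)}.
\]

On its face, M(b | x) is a fixed functional of the past data x, which is exactly what makes the goal-seeking claim surprising.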
I am not yet convinced by this argument. Think about a computable approximation to Solomonoff induction—like Levin search. Why does it “want” its predictions to be right any more than it “wants” them to be wrong? Superficially, correct and incorrect predictions are treated symmetrically by such systems.
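To make the skeptical point concrete, here is a minimal runnable sketch of a Levin-style search over a toy program space. The bit-pattern "machine" below is my own illustrative assumption, not a real universal machine; only the budgeting scheme (each program p of length l gets about 2^(-l) of each phase's time) is Levin's.

```python
from itertools import product

def toy_machine(program: str, n_steps: int) -> str:
    """Stand-in 'universal' machine (an illustrative assumption, not a
    real UTM): interpret a bit string as a pattern repeated forever,
    emitting one bit per step."""
    if not program:
        return ""
    return "".join(program[i % len(program)] for i in range(n_steps))

def levin_search(data: str, max_phase: int = 30):
    """Levin-style search: in phase T = 2^k, each program p of length l
    runs for at most 2^(-l) * T steps, so short programs get most of the
    budget. Returns the first program that reproduces `data`, together
    with the bit it predicts next."""
    need = len(data) + 1  # steps to reproduce data plus one predicted bit
    for k in range(max_phase):
        T = 2 ** k
        for l in range(1, k + 1):
            budget = T // 2 ** l
            if budget < need:  # longer programs have even less budget
                break
            for bits in product("01", repeat=l):
                p = "".join(bits)
                out = toy_machine(p, need)
                if out.startswith(data):
                    return p, out[len(data)]  # shortest fitting program wins
    return None

# The shortest program consistent with the data is "01"; it predicts "0".
print(levin_search("01010101"))  # ('01', '0')
```

Nothing in this loop rewards a prediction for later coming true: a program is kept or discarded solely by whether it fits the data already seen, which is the symmetry the objection points to.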
The original argument appears to have no defenders. Perhaps that is because it is not very strong.