Ooh, this looks right. A predictor that “notices” itself in the outside world can output predictions that make themselves true, e.g. by stopping us from preventing predicted events, or something even weirder. Thanks!
(At first I thought Solomonoff induction doesn’t have this problem, because it’s uncomputable and thus cannot include a model of itself. But it seems that a computable approximation to Solomonoff induction may well exhibit such “UDT-ish” behavior, because it’s computable.)
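A toy sketch of that failure mode, with made-up numbers and nothing Solomonoff-specific: the hypothetical environment reacts to the published prediction, so a predictor scored purely on accuracy gravitates toward the self-fulfilling answer rather than the no-feedback baseline.

```python
# Toy self-fulfilling predictor (illustrative numbers only, not Solomonoff induction).
# The "world" reacts to the published prediction: the more strongly a crash is
# predicted, the more likely it becomes.

def crash_probability(published_prediction: float) -> float:
    """Hypothetical world model: 10% baseline crash risk, plus panic driven by
    the published prediction adding up to 90 more percentage points."""
    return 0.1 + 0.9 * published_prediction

def best_accuracy_prediction(candidates):
    """Pick the prediction with the smallest squared error *after* the world
    has reacted to it, i.e. search for a self-consistent (fixed-point) prediction."""
    return min(candidates, key=lambda p: (crash_probability(p) - p) ** 2)

candidates = [i / 100 for i in range(101)]
print(best_accuracy_prediction(candidates))  # 1.0 -- "predict a crash" is the only zero-error output
print(crash_probability(0.0))                # 0.1 -- the risk if the prediction didn't feed back
```

Here the only prediction that comes out exactly right is the one that makes the crash near-certain, even though the baseline risk was 10%; accuracy alone doesn’t distinguish describing the future from bringing it about.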
This idea is probably hard to notice at first, since it requires recognizing that a future with a fixed definition can still be controlled by other things with fixed definitions (you don’t need to replace the question in order to control its answer). So even if a “predictor” doesn’t “act”, it still does determine facts that control other facts, and anything that we’d call intelligent cares about certain facts. For a predictor, this would be the fact that its prediction is accurate, and this fact could conceivably be controlled by its predictions, or even by some internal calculations not visible to its builders. With acausal control, air-tight isolation is more difficult.
I am pretty sure that Solomonoff induction doesn’t have this problem. Not because it is uncomputable, but because it’s not attempting to minimise its error rate. It doesn’t care if its predictions don’t match reality.
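For reference, the standard definition shows why there is no error term being optimized anywhere: Solomonoff’s predictor is a fixed Bayesian mixture over programs for a universal prefix machine U, and its prediction is just a ratio of prior weights.

```latex
% Solomonoff's predictive distribution: a fixed mixture over programs p
% (weighted by their length \ell(p)), with nothing being minimized.
M(x_{1:t}) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x_{1:t}} 2^{-\ell(p)},
\qquad
M(x_{t+1} \mid x_{1:t}) \;=\; \frac{M(x_{1:t}\,x_{t+1})}{M(x_{1:t})}
```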
If reality is (approximately) computable, then minimizing error rate is (approximately) the same thing as matching reality.
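The result behind this claim, stated loosely: if the data is generated by any computable measure μ, Solomonoff’s M converges to μ, with cumulative expected squared prediction error bounded in terms of the complexity of μ. So in the limit, matching a computable reality and having a low error rate come to the same thing.

```latex
% Solomonoff/Hutter convergence bound (loose form): for data drawn from a
% computable measure \mu, the total expected squared prediction error of M
% is finite, on the order of K(\mu) \ln 2.
\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[\big(M(x_{t+1}{=}1 \mid x_{1:t}) - \mu(x_{t+1}{=}1 \mid x_{1:t})\big)^{2}\right]
\;\le\; K(\mu)\,\ln 2
```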
(Retracted because I misread your comment. Will think more.)