A self-confirming prediction is what an oracle that was a naive sequence predictor (or that was rewarded on results) would give. https://www.lesswrong.com/posts/i2dNFgbjnqZBfeitT/oracles-sequence-predictors-and-self-confirming-predictions
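A minimal sketch of that dynamic (my own illustration, not anything from the linked post; the reaction function `donation` and the update rule are toy assumptions): a predictor rewarded on matching results drifts toward the point where prediction and outcome agree.

```python
def donation(prediction: float) -> float:
    """Hypothetical world: the donor gives a baseline of 100
    plus half of whatever the oracle predicts."""
    return 100.0 + 0.5 * prediction

prediction = 0.0
for _ in range(30):
    outcome = donation(prediction)              # the world reacts to the published prediction
    prediction += 0.5 * (outcome - prediction)  # rewarded on results: nudge prediction toward outcome

# The predictor settles near the fixed point P* = 200, where
# donation(P*) == P*: a self-confirming prediction.
print(round(prediction, 2), round(donation(prediction), 2))
```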
The donor example was meant to show how such a predictor could end up moving you far in either the positive or the negative direction. If you were optimising for income rather than accuracy, the choice is obvious.
The £(P±1) is a continuous model of a discontinuous reality. The model has a self-confirming prediction, and it turns out “reality” (the discretised version) has one too. Unless derivatives get extremely high, a continuous model implies the existence of a self-confirming prediction, which in turn implies a close-to-self-confirming prediction in the discretised model.
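Spelling out the implied step (my reconstruction of the argument, not something stated in the thread; the interval and the derivative bound L are assumptions for illustration):

```latex
% A self-confirming prediction is a fixed point of f, the map from
% prediction to outcome. If f : [a,b] \to [a,b] is continuous, then
% g(P) = f(P) - P has g(a) \ge 0 and g(b) \le 0, so the intermediate
% value theorem gives a P^* with f(P^*) = P^*.
%
% Discretising to whole pounds, take P_0 = \mathrm{round}(P^*), so
% |P_0 - P^*| \le 1/2. With L = \sup |f'|:
\[
  |f(P_0) - P_0|
    \;\le\; |f(P_0) - f(P^*)| + |P^* - P_0|
    \;\le\; L\,|P_0 - P^*| + \tfrac{1}{2}
    \;\le\; \tfrac{L+1}{2}.
\]
% So unless the derivative bound L is extremely large, P_0 is a
% close-to-self-confirming prediction in the discretised model.
```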
I think I’m still confused: a naive sequence predictor is _OF COURSE_ broken by perverse or adversarial behaviors that it leaves unmodelled (precisely because of its naivety). And such a predictor cannot unlock new corners of strategy space, or generate self-reinforcing predictions, because the past sequence on which it’s trained won’t have those features.
See my last paragraph above; I don’t think we can rely on predictors not unlocking new corners of strategy space, because a predictor may be able to gradually learn how to do so.