Very powerful Predictors are unsafe in a rather surprising way: when given sufficient data about the real world, they exhibit goal-seeking behavior, i.e. they calculate a distribution over future data in a way that brings about certain real-world states.
This isn’t necessarily the case. What the loopiness of predictors shows is that a simple predictor is underspecified. How correct a prediction is depends on the prediction that is made (i.e. it is loopy), so you need another criterion in order to actually program the predictor to resolve this case. One way is to program it to “select” predictions according to some accuracy metric, but that is only one option. There may be safer ways that make the predictor less agenty, such as having it answer “LOOPY” whenever its would-be predictions vary too much depending on which answer it gives.
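To make the “LOOPY” escape hatch concrete, here is a minimal sketch. The function `outcome_given_prediction` is a hypothetical world model capturing the self-reference: it maps the announced prediction to the outcome that would actually occur. The predictor refuses to answer when the outcome depends on which prediction it announces, and otherwise reports a fixed point (a prediction that comes true when announced). This is only one illustrative formalization of the idea, not a claim about how a real predictor works.

```python
def predict_or_loopy(outcome_given_prediction, candidates):
    """Return a self-fulfilling prediction if the world ignores what we
    announce; return "LOOPY" if the outcome varies with the announcement
    or no candidate prediction comes true when announced."""
    # Simulate the outcome under each candidate announcement.
    outcomes = {p: outcome_given_prediction(p) for p in candidates}

    # If announcing different predictions yields different outcomes,
    # the situation is loopy: refuse to pick one.
    if len(set(outcomes.values())) > 1:
        return "LOOPY"

    # Otherwise report a fixed point: a prediction that is true
    # when announced.
    for p, o in outcomes.items():
        if p == o:
            return p
    return "LOOPY"


# A non-loopy world: the weather ignores the forecast.
print(predict_or_loopy(lambda p: "rain", ["rain", "sun"]))   # → rain

# A loopy world: the outcome simply copies the announcement.
print(predict_or_loopy(lambda p: p, ["rain", "sun"]))        # → LOOPY
```

The design choice here is deliberately conservative: any dependence of the outcome on the announcement triggers “LOOPY”, rather than letting the predictor optimize over which self-fulfilling answer to give, which is exactly the agenty behavior the comment is trying to avoid.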