One possibility is that it’s able to find a useful outside view model such as “the Predict-O-Matic has a history of making negative self-fulfilling prophecies”. This could lead to the Predict-O-Matic making a negative prophecy (“the Predict-O-Matic will continue to make negative prophecies which result in terrible outcomes”), but this prophecy wouldn’t be selected for being self-fulfilling. And we might usefully ask the Predict-O-Matic whether the terrible self-fulfilling prophecies will continue conditional on us taking Action A.
Maybe I misunderstood what you mean by dualism, but I don’t think that’s true. Say the Predict-O-Matic has an outside-view model (of itself) like “The metal box on your desk (the Predict-O-Matic) will make a self-fulfilling prophecy that maximizes the number of paperclips”. Then you ask it how likely it is that your digital records will survive for 100 years. It notices that this depends significantly on how much effort you make to secure them, and that this in turn depends significantly on what the metal box on your desk tells you. So it uses its low-resolution model of what the box says. To work out what that is, it checks which outputs would be self-fulfilling, and then which of those leads to the most paperclips. The less secure your digital records are, the more you will invest in paper, and the more paperclips you will need. Therefore the metal box will tell you the lowest self-fulfilling probability for your question. Since that number is *self-fulfilling*, it is in fact the correct answer, and the Predict-O-Matic will answer with it.
I think this avoids your argument that
> I contend that Predict-O-Matic doesn’t know it will predict P = A at the relevant time. It would require time travel: to know whether it will predict P = A, it will have to have made a prediction already, and it’s still formulating its prediction as it thinks about what it will predict.
because it doesn’t have to simulate itself in detail to know what the metal box (it) will do. The low-resolution model provides a shortcut around that, but it will be accurate despite the low resolution, because by believing that the box is simple, it makes the box in fact simple.
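To make that concrete, here is a minimal toy sketch of the selection step in Python. The response curve, the paperclip payoff, and every number in it are invented for illustration; nothing here comes from the post itself. The box treats the outcome as a function of its own announced number, keeps the announcements that would come true, and reports the one that maximizes paperclips.

```python
# Toy sketch of the low-resolution outside-view model. The response curve, the
# payoff, and all numbers are made up for illustration.

def true_survival_prob(announced_p):
    # Hypothetical effect of the announcement on your behavior: a gloomy number
    # makes you abandon digital records for paper, a hopeful number makes you
    # keep maintaining them.
    return 0.05 if announced_p < 0.5 else 0.95

def paperclips(survival_p):
    # Hypothetical payoff: the less your digital records survive, the more
    # paper (and hence paperclips) you end up using.
    return 1 - survival_p

candidates = [i / 100 for i in range(101)]  # possible announcements 0.00 .. 1.00
self_fulfilling = [p for p in candidates
                   if abs(true_survival_prob(p) - p) < 0.005]  # announcement comes true

# The low-resolution rule: among self-fulfilling announcements, report the one
# that maximizes paperclips, i.e. the lowest one.
answer = max(self_fulfilling, key=paperclips)
print(self_fulfilling)  # [0.05, 0.95]: two equilibria, one gloomy and one hopeful
print(answer)           # 0.05: the gloomy equilibrium wins on paperclips
```

In this toy version there are two self-consistent answers, and the paperclip preference is what breaks the tie toward the gloomy one.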
Can you usefully ask for conditionals? Maybe. The answer to the conditional depends on which worlds you are likely to take Action A in. It might be that in most worlds where you do A, you do it because of a prediction from the metal box, and since we know those predictions maximize paperclips, there’s a good chance the action will fail to prevent the bad outcome in those circumstances. But if that’s not the case, for example because it’s certain you won’t ask the box any more questions between this one and the event it tries to predict, then the conditional answer should be genuinely informative.
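As a toy version of that dependence (with entirely made-up probabilities): if most A-worlds are box-prompted and the bad outcome mostly still happens in those, the conditional comes out pessimistic; if you’re certain no further questions will be asked, the box-prompted share drops to zero and the conditional becomes informative.

```python
# Made-up numbers for P(bad outcome | you take Action A), mixing over *why*
# you took A in the first place. None of these probabilities come from anywhere.
def p_bad_given_a(p_box_prompted,
                  p_bad_if_box_prompted=0.8,
                  p_bad_if_independent=0.1):
    return (p_box_prompted * p_bad_if_box_prompted
            + (1 - p_box_prompted) * p_bad_if_independent)

print(round(p_bad_given_a(0.9), 2))  # 0.73: mostly box-prompted worlds, contaminated conditional
print(round(p_bad_given_a(0.0), 2))  # 0.1: no further predictions in play, the conditional is usable
```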
It might be possible to avoid any problems of this sort by only ever asking questions of the type “Will X happen if I do Y now (with no time to receive new info between hearing the prediction and doing the action)?”, because by backwards induction the correct answer will not depend on what you actually do. This doesn’t avoid the scenarios in the original post where multiple people act on their Predict-O-Matics, but I suspect those aren’t solvable without coordination.
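And a toy version of why the “do Y now” framing helps, again with invented numbers: once the action is committed at the moment of the prediction, the announcement has no channel left through which to change the outcome, so only one announcement can be consistent with itself and there is nothing for the box to select among.

```python
# Toy check of the "Will X happen if I do Y now?" question type; 0.7 and 0.2
# are invented. Because the action is already fixed, the announcement cannot
# influence the outcome.
def true_prob_of_x(committed_action, announced_p):
    # Hypothetical world model: the outcome depends only on the committed
    # action; announced_p is deliberately unused since there is no time to react.
    return 0.7 if committed_action == "Y" else 0.2

candidates = [i / 100 for i in range(101)]
consistent = [p for p in candidates
              if abs(true_prob_of_x("Y", p) - p) < 0.005]
print(consistent)  # [0.7]: a unique consistent answer, no room for paperclip-flavored selection
```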