This is the most interesting assessment of free will I’ve seen in a long time. I’ve thought about this a great deal before, particularly when trying to work out my feelings on this post. Obviously we shouldn’t expect any predictor to also be able to predict itself, for halting-problem reasons. But yeah, why couldn’t it shop around for a way of stating the prediction that preserves the outcome? Eventually I decided that there probably was something like that, but it would look a lot more like coercion than prediction. I, for instance, could probably predict a person’s actions fairly well if I were holding them at gunpoint. Omega could do the same by digging through possible sentences until it worked out how to tell you what you’d do without swaying the odds.
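To make that “digging through possible sentences” concrete, here’s a minimal sketch of the search I have in mind. Everything in it is hypothetical: `respond` stands in for Omega’s model of how you’d act after hearing a given announcement, and the candidate actions are just placeholders.

```python
from typing import Callable, Iterable, Optional

def find_stable_prediction(
    actions: Iterable[str],
    respond: Callable[[str], str],
) -> Optional[str]:
    """Search candidate announcements for a fixed point: an action X
    such that hearing 'You will X.' still leads the person to do X."""
    for action in actions:
        if respond(f"You will {action}.") == action:
            return action  # telling them preserves the outcome
    return None  # every announcement sways the person away from it

# Toy model of a person: mildly contrarian, so told they'll defect
# they cooperate out of spite; told they'll cooperate, they shrug
# and cooperate anyway.
def toy_respond(announcement: str) -> str:
    return "cooperate"

print(find_stable_prediction(["defect", "cooperate"], toy_respond))
# -> "cooperate": the one announcement that survives being heard
```

If no fixed point exists (picture a determined contrarian who always does the opposite of whatever they’re told), the search comes up empty, which is where the gunpoint analogy bites: at that point Omega can’t just pick better wording, it has to change the incentives.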