This feels like the kind of philosophical pondering that only makes sense in a world of perfect spherical cows, but immediately falls apart once you consider realistic real-world parameters.
Like… to go back to Newcomb’s problem… perfect oracles that can predict the future obviously don’t exist. I mean, I know the author knows that. But I think we disagree on how relevant that is?
Discussions of Newcomb’s problem usually handwave the oracle problem away; e.g. “Omega’s predictions are almost always right”… but the “almost” is pulling a lot of weight in that sentence. When is Omega wrong? How does it make its decisions? Is it analyzing your atoms? Even if it is, it feels like it should only be able to get a read on your personality and how likely you are to pick one box or two, not a perfect prediction of what you’ll actually do. Indeed, at the moment it offers you the choice, it’s perfectly possible that the decision you’ll make is still fundamentally random, and you might make either choice depending on factors Omega can’t possibly control.
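(For concreteness, here’s a rough back-of-the-envelope sketch, not something the comment above spells out, of how much weight that “almost” buys you on naive expected value. It assumes the standard payoffs from the usual statement of the problem, $1,000,000 in the opaque box if Omega predicted one-boxing and $1,000 always in the transparent box, and an Omega that guesses your actual choice correctly with probability p:

```python
# Rough sketch, assuming the standard Newcomb payoffs and an Omega that
# predicts your actual choice correctly with probability p.

def expected_values(p):
    """Naive expected value of each strategy given Omega's accuracy p."""
    one_box = p * 1_000_000                    # opaque box is full iff Omega guessed right
    two_box = 1_000 + (1 - p) * 1_000_000      # opaque box is full iff Omega guessed wrong
    return one_box, two_box

for p in (0.5, 0.5005, 0.51, 0.9, 0.99):
    one, two = expected_values(p)
    print(f"p = {p}: one-box EV = {one:>12,.0f}, two-box EV = {two:>12,.0f}")
```

Break-even is already around p ≈ 0.5005, i.e. a barely-better-than-chance personality read; so the real question is the one raised above: where does that accuracy come from, and is it tracking your actual decision or just your disposition?)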
I think there are interesting discussions to be had about e.g. the value of honor, of sticking to precommitments even when the information you have suggests it’s better for you to betray them, etc. And on the other hand, there’s value in discussing the fact that, in the real world, there are a lot of situations where pretending to have honor is a perfectly good substitute for actually having honor, and wannabe-Omegas aren’t quite able to tell the difference.
But you have to get out of the realm of spherical cows to have those discussions.