Omega-level hypercomputational powers aren’t needed, just decent enough prediction about what someone would do. I’ve seen Transparent Newcomb being run on someone before, at a math camp. They were predicted to not take the small extra payoff, and they didn’t. There was also an instance of acausal vote trading that I managed to pull off a few years ago, and I’ve put someone in a counterfactual-mugging sort of scenario where I did pay out, because I predicted they’d take the small loss in a nearby possible world. Two of those three instances were cases where I was specifically picking people who seemed unusually likely to take this sort of thing seriously, so what they’d do was predictable.
I guess you figure out the entity is telling the truth in roughly the same way you’d figure out a human is telling the truth? Like “they did this a lot against other humans and their prediction record is accurate”.
And no, I don’t think that you’d be able to get from this mathematical framework to proving “a proof of benevolence is impossible”. What the heck would that proof even look like?
What the heck would that proof even look like, indeed. That’s what I haven’t figured out yet.
(On the practical level… I’m pretty sure it would be awesome to interact with a benevolent god, and you seem to be suggesting that there are prosaic versions.
One obvious prosaic version of such proximity is a job in finance. The courts and contracts and so on are kind of like Omega, and surely this entity is benevolent? Luck goes well: millions in bonuses. Luck goes badly: you’re laid off. And since the system in general is of course benevolent, surely it would be acceptable to participate? The personal asymmetry in outcomes would make the whole situation potentially nice to be near.
But then I wonder about that assumption of benevolence and think about Mammon, and I remember The Big Short …and I go back to wondering how Omega offers a finite creature a proof of benevolence.)
The proof of benevolence is a red herring. Just imagine the exact same game happening again and again. Eventually you should become convinced that the game works as advertised.
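To make the "repeated games" point concrete: here is a minimal sketch of that kind of updating, assuming we model the predictor's per-game accuracy with a standard Beta-Binomial update starting from a uniform prior. (The function name and the choice of prior are mine, for illustration; nothing in the discussion above commits to this exact model.)

```python
def posterior_accuracy(correct: int, total: int,
                       a: float = 1.0, b: float = 1.0) -> float:
    """Posterior mean of the predictor's accuracy after observing
    `correct` correct predictions out of `total` games, under a
    Beta(a, b) prior. Beta(1, 1) is the uniform prior."""
    return (a + correct) / (a + b + total)

# With no games observed, we have no evidence either way:
print(posterior_accuracy(0, 0))    # 0.5

# After 50 correct predictions in 50 games, the posterior mean
# is (1 + 50) / (2 + 50) = 51/52, about 0.98:
print(posterior_accuracy(50, 50))
```

The point is just that no proof is required: each observed round in which the game works as advertised pushes the posterior toward "this predictor is accurate", and after enough rounds it dominates any reasonable prior.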