Ok, Tile-the-Universe-with-Smiles should make some paperclips because Clippy will put smiles on something. But both agents are so far apart that they can’t empirically verify the other agent’s existence.
So, this makes sense if Clippy and Tiling can deduce each other’s existence without empirical evidence, and each one thinks this issue is similar enough to Newcomb’s problem that they pre-commit to one-boxing (i.e., following through even if they can’t empirically verify follow-through by the other party).
But treating this problem like Newcomb’s rather than like a one-shot Prisoner’s Dilemma seems wrong to me. Even under some advanced decision theory, there doesn’t seem to be any reason for either agent to believe the other is similar enough to cooperate with. Alternatively, each agent might have some way of verifying compliance, but then labeling this reasoning “acausal” seems terribly misleading.
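To make this concrete, here’s a toy expected-value calculation (my own sketch; the payoff numbers are the standard illustrative PD values, not anything from the Clippy/Tiling scenario). Let p be the probability that the other agent’s decision matches yours: p near 1 for an exact copy of you, p near 0.5 for an agent whose reasoning is uncorrelated with yours.

```python
# Toy one-shot Prisoner's Dilemma under correlated decisions.
# Payoffs are illustrative: mutual cooperation 3, mutual defection 1,
# sucker's payoff 0, temptation 5.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def expected_value(my_move: str, p: float) -> float:
    """Expected payoff if the opponent's move matches mine with probability p."""
    other_if_match = my_move
    other_if_differ = "D" if my_move == "C" else "C"
    return (p * PAYOFF[(my_move, other_if_match)]
            + (1 - p) * PAYOFF[(my_move, other_if_differ)])

for p in (0.5, 0.9):
    print(f"p={p}: EV(C)={expected_value('C', p):.2f}, "
          f"EV(D)={expected_value('D', p):.2f}")
# p=0.5: EV(C)=1.50, EV(D)=3.00  -- defection dominates with no correlation
# p=0.9: EV(C)=2.70, EV(D)=1.40  -- cooperation pays only given high similarity
```

Unless each agent has a positive reason to put p well above chance, defecting wins; and a reason to put p that high is exactly what seems to be missing here.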
Umm, pretty much all of the advanced decision theories talked about here do cooperate on the Prisoner’s Dilemma. In fact, it’s sometimes used as a criterion, I’m pretty sure.
The advanced decision theories cooperate with themselves. They also try to figure out whether the counterparty is likely to cooperate. But they don’t necessarily cooperate with everyone; consider DefectBot, which defects unconditionally.
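A minimal sketch of that distinction, in the program-equilibrium toy setting where each agent gets to read the other’s source code (the agent names follow the usual convention; the implementation is my own illustration, not anyone’s canonical one):

```python
import inspect

def defect_bot(opponent_source: str) -> str:
    """DefectBot: defects unconditionally, whoever the opponent is."""
    return "D"

def clique_bot(opponent_source: str) -> str:
    """CliqueBot: cooperates iff the opponent's source is an exact copy of its own."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

def play(agent_a, agent_b):
    """One round where each agent sees the other's source before moving."""
    return (agent_a(inspect.getsource(agent_b)),
            agent_b(inspect.getsource(agent_a)))

# Run from a file (inspect.getsource needs the source on disk):
print(play(clique_bot, clique_bot))  # ('C', 'C') -- cooperates with itself
print(play(clique_bot, defect_bot))  # ('D', 'D') -- doesn't get exploited
```

Cooperating with a copy of yourself is the easy case; the hard part, and the one at issue here, is establishing that a distant, unobserved agent is relevantly similar.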
Internet connection wonkiness = inadvertent double post. Sorry about that, folks.
This was too obvious for me to notice the assumption.
@TimS, this is an important objection. But rather than putting my reply under this downvoted thread, I will save it for later.
Because the post was retracted, it will not be downvoted any further, so you’re safe to respond.