Scenario (b) doesn’t explain/analyse the situation the way I’d explain/analyse it. If Bob is able to precommit himself to play C if and only if Alice plays C, then Alice’s mindreading reads Bob’s precommitment, Alice plays C to ensure Bob will also play C (otherwise Alice would lose), then Bob’s precommitment is followed through and the (C, C) reality becomes true.
If someone can plausibly precommit themselves, via human concepts like honor or duty or obligation, or via computer concepts like rewriting one’s software code—and if they can signal this convincingly, then mutual cooperation becomes a possibility.
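The logic above can be sketched in code. This is only an illustrative toy (the payoff table, names, and "mindreading via simulation" framing are my assumptions, not anything from the survey): Bob's precommitment is a fixed strategy, and Alice "reads" it by simulating it against each of her options.

```python
# Toy sketch of conditional precommitment in a one-shot Prisoner's
# Dilemma. The payoff table and function names are illustrative.

PAYOFFS = {  # (alice_move, bob_move) -> (alice_payoff, bob_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def bob_precommitment(alice_move):
    """Bob's commitment: play C if and only if Alice plays C."""
    return "C" if alice_move == "C" else "D"

def alice_best_response(bob_strategy):
    """Alice 'mindreads' Bob by simulating his committed strategy
    against each of her options and picking the higher payoff."""
    return max(("C", "D"),
               key=lambda a: PAYOFFS[(a, bob_strategy(a))][0])

alice = alice_best_response(bob_precommitment)
bob = bob_precommitment(alice)
print(alice, bob)  # prints: C C
```

Because Bob's commitment makes defection unprofitable for Alice (she simulates that D gets her 1 while C gets her 3), her best response is C, and (C, C) follows.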
“Is this scenario just a theoretical curiosity that can never happen in real life, because it is impossible to accurately predict the actions of any agent of any significant complexity?”
It’s a scenario that is already a reality to some limited extent, though we use concepts like duty, honor, etc… It doesn’t always work, mainly because we can’t effectively signal the solidity of our precommitments, nor are we always of such iron will that our precommitments are actually solid enough.
EDIT TO ADD: And isn’t this concept pretty much what the whole Mutually Assured Destruction doctrine was built on?
Either way, this question is obviously bad for a survey—as it has to be answered with a small essay, not with a multiple-choice.