Alice and Bob are playing a variation of a one-shot Prisoner’s Dilemma. In this version of the game, instead of choosing their actions simultaneously, Alice moves first, and then Bob moves after he knows Alice’s move. However, Alice knows Bob’s thought processes well enough that she can predict his move ahead of time.
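(For concreteness, here is a minimal sketch of the game being described. The payoff numbers are assumed, using the conventional Prisoner’s Dilemma ordering; the problem itself does not specify them.)

```python
# Minimal sketch of the game as described: Alice moves first, Bob observes
# her move, and Alice can predict Bob's strategy in advance. The payoff
# numbers are assumed (standard Prisoner's Dilemma ordering, T > R > P > S);
# the problem statement does not specify them.
PAYOFFS = {  # (alice_move, bob_move) -> (alice_utility, bob_utility)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play(alice_strategy, bob_strategy):
    """Bob's strategy maps Alice's observed move to his reply; Alice's
    strategy takes Bob's whole strategy as input, modelling her ability
    to predict him."""
    alice_move = alice_strategy(bob_strategy)
    bob_move = bob_strategy(alice_move)
    return PAYOFFS[(alice_move, bob_move)]
```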
We need information about what Bob believes about Alice’s thought processes. I am going to answer as if you had appended “and Bob knows that Alice can do this.” to the previous sentence, so that I can give a useful answer. Without such information the problem would just be about allocating priors to Bob that represent his beliefs about Alice’s thought processes.
Both Alice and Bob are rational utility maximizers.
Be more specific. People embed various assumptions about what is ‘rational’ behind that phrase. If you mean “Alice and Bob are both Causal Decision Theorists attempting to maximise utility” then the answer is (D, D). If Bob acts ‘rationally’ inasmuch as he operates according to a reflective decision theory of some sort (i.e. TDT or UDT) then the outcome is (C, C), regardless of which of the plausible decision theories Alice is assumed to be implementing (CDT, TDT, UDT, EDT).
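(To make that concrete, here is a small sketch. The payoffs are assumed again, and the reflective theories are caricatured as Bob having a visible “mirror Alice’s move” disposition, which is what lets Alice’s prediction do the work.)

```python
# Illustrative only: payoffs are assumed, and TDT/UDT-style reasoning is
# caricatured as Bob having a visible "mirror Alice's move" disposition.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def cdt_bob(alice_move):
    # A CDT Bob moves last and simply picks whichever reply pays him more.
    return max("CD", key=lambda b: PAYOFFS[(alice_move, b)][1])

def mirror_bob(alice_move):
    # Stand-in for a reflective Bob: cooperate exactly when Alice cooperates.
    return alice_move

def alice(bob_policy):
    # Alice predicts Bob's policy perfectly and optimises against it.
    return max("CD", key=lambda a: PAYOFFS[(a, bob_policy(a))][0])

for bob in (cdt_bob, mirror_bob):
    a = alice(bob)
    print(bob.__name__, "->", (a, bob(a)))
# cdt_bob -> ('D', 'D')
# mirror_bob -> ('C', 'C')
```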
The underspecification was intentional, so that people may answer according to their preferred decision theory.
Ahh, a survey question. In that case may I suggest leaving the ‘rational’ there but removing the ‘utility maximisers’. I think that would get you the most reliable information of the kind you are trying to elicit from the responses. This is just because there are some whose “preferred decision theory” and specific use of terminology are such that they would say the ‘rational’ thing for Bob to do is to cooperate but that the “utility maximising” thing would be to defect. I expect you are more interested in the decision output than the ontology.
Yes.