Maybe change

"do you understand how a rational agent might prefer B?"

to

"do you understand how a society of rational agents might want to create a framework of enforceable precommitments that incentivizes B to a point such that P1, when being mugged, will prefer B?"
For example, if anyone who gave up a wallet later received a death sentence for doing so, the loss of life would be factored out: in effect, being mugged would become a death sentence regardless of your choice, in which case it'd be much easier to hang on to your purse for the good of the many. (Even if society killing you otherwise could be construed as having a slightly alienating effect.)
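A minimal sketch of the "factored out" point, under stated assumptions: the payoff numbers below are invented for illustration, B is read as refusing to hand over the wallet, and the mugger is assumed to carry out the lethal threat on refusal. Once compliance also carries a death sentence, death appears on both branches of the comparison and drops out, leaving only the wallet.

```python
# Toy expected-utility comparison for the mugging example above.
# All numbers are illustrative assumptions, not taken from the discussion.

DEATH = -1000      # disutility of being killed (by the mugger or by the state)
LOSE_WALLET = -10  # disutility of handing over the wallet

def victim_utility(hand_over: bool, compliance_is_capital_crime: bool) -> int:
    """Utility to the mugged person P1 under one of the two regimes."""
    if hand_over:
        # Giving up the wallet always costs the wallet; under the proposed
        # framework it also carries a death sentence.
        return LOSE_WALLET + (DEATH if compliance_is_capital_crime else 0)
    # Refusing: assume the mugger carries out the threat.
    return DEATH

for regime in (False, True):
    refuse = victim_utility(False, regime)
    comply = victim_utility(True, regime)
    better = "comply" if comply > refuse else "refuse (option B)"
    print(f"compliance punished by death={regime}: "
          f"refuse={refuse}, comply={comply} -> P1 prefers to {better}")
```

With no penalty the victim prefers to comply (losing the wallet beats dying); with the penalty attached, death is a constant across both choices and the remaining difference is just the wallet, so P1 prefers to refuse.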
Edited accordingly. Thanks.