If you have 100% identical consequentialist values to all other humans, then that means ‘cooperation’ and ‘defection’ are both impossible for humans (because they can’t be put in PDs). … To properly visualize the PD, you need an actual value conflict
True, but the flip side of this is that efficiency (in Coasian terms) is defined precisely as the pursuit of 100% identical consequentialist values, where the shared “values” are a weighted sum of the individual agents’ utility functions (with the weights typically determined by each agent’s endowment).
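A minimal sketch of that definition (notation mine: $u_i$ is agent $i$'s utility function, $\lambda_i$ the welfare weight implied by their endowment):

$$
W(x) \;=\; \sum_{i=1}^{n} \lambda_i \, u_i(x), \qquad \lambda_i > 0
$$

Under the usual convexity assumptions, an outcome $x^*$ is Pareto-efficient iff it maximizes $W$ for some choice of positive weights $\lambda$. So agents jointly pursuing efficiency behave as if they shared the single consequentialist value function $W$, which is exactly the condition under which the cooperate/defect distinction dissolves.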