Argue this point in more detail; it isn’t obvious.
It’s not obvious, yeah; that was a failure of communication on my part in the original post. My point, as I intended it, was that I mixed up my intuitive feeling (“a rationalist should follow the categorical imperative because it feels sensible”) with an obvious fact. My reasoning was based on a simplistic model of the repeated PD, where punishing non-normative behavior, and otherwise trusting and abiding by the norms, works. So I was basically asking for clarification in the guise of a statement :)
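To make that simplistic model concrete, here’s a minimal sketch of a repeated PD, with tit-for-tat standing in for “punish defection, trust and cooperate otherwise.” The payoff numbers and the strategy set are my own illustrative assumptions, not anything from the original post:

```python
import itertools

# Hypothetical Axelrod-style payoffs: (my_move, their_move) -> my payoff.
# C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move (punish defection)."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def play(strat_a, strat_b, rounds=100):
    """Total payoffs for two strategies over an iterated game."""
    seen_by_a, seen_by_b = [], []  # opponent moves each side has observed
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(seen_by_a), strat_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

strategies = [tit_for_tat, always_defect, always_cooperate]
for a, b in itertools.combinations_with_replacement(strategies, 2):
    print(f"{a.__name__} vs {b.__name__}: {play(a, b)}")
# tit_for_tat reaches full mutual cooperation with cooperators (300, 300)
# and concedes only the first round to a pure defector (99 vs 104),
# whereas always_cooperate gets exploited for the whole game (0 vs 500).
```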
I think my earlier response to you (now deleted) misunderstood your comment. I’m still not sure I understand you now, but I’ll give it another shot.
All of the things I listed are commonly accepted within the relevant fields as individually rational. It boils down to the idea that it is individually rational to defect in a one-shot PD where you’ll never see the other player again and the result will never be made public. Yes, we have lots of mechanisms to improve group rationality, like laws, institutions, social norms, etc., but all of that just shows how hard group rationality is.
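For what it’s worth, the one-shot claim is just a dominance argument; here’s a quick check, again with standard hypothetical PD payoffs rather than anything from the thread:

```python
# (my_move, their_move) -> my payoff; C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Whatever the other player does, defecting pays strictly more,
# so defection is the individually rational one-shot move.
for their_move in ("C", "D"):
    assert PAYOFF[("D", their_move)] > PAYOFF[("C", their_move)]
    print(f"opponent plays {their_move}: defect {PAYOFF[('D', their_move)]} "
          f"> cooperate {PAYOFF[('C', their_move)]}")

# Both players reasoning this way land on (D, D) = (1, 1), even though
# (C, C) = (3, 3) is better for both: individually rational, group-irrational.
```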
Here’s another example that might help make my point. How much “CPU time” does the average person’s brain spend playing status games instead of doing something socially productive? That’s hardly rational at the group level, but we have little hope of reducing it by any significant amount.
One box!
Eliezer’s solution to Newcomb’s problem doesn’t apply to human cooperation.