I think all of them follow a pattern of “there is a naive baseline expectation, where you treat other people’s maps as a black box, that suggests a deal is good, and a more sophisticated expectation, which involves modeling the details of other people’s maps, that suggests it’s bad,” and each highlights some heuristics you could have used to figure this out in advance (in the subway example, a fully empty car does indeed seem a bit too good to be true; in the juggling example, you do really need to think about who is going to sign up; in the bedroom example, you want to avoid giving the other person a choice even if both options look equally good to you; in the Thanksgiving example, you needed to model which foods get eaten first and how correlated your preferences are with those of other people; etc.).
This feels like a relatively natural category to me. It’s not an earth-shatteringly unintuitive category, but I dispute the claim that it doesn’t carve reality at an important joint.
They don’t. As I already explained, these examples are bad because the outcomes are not all bad, in addition to not reflecting the same causal patterns or being driven by adverse selection. The only consistent thing here is a Marxian paranoia that everyone else is naive and being ripped off in trades, which is a common cognitive bias behind denying gains from trade. The subway car is simply an equilibrium: you cannot tell whether ‘you’ are better off or worse off in any given car, so it is not the case that ‘the deal is bad.’ The room and food examples actually imply the best outcome happened, as the room and the food went to those who valued them more (and so ate the food sooner); it’s not about correlation of preferences, it’s about intensity, and the deal was good there. And the Laffy Taffy example explicitly doesn’t involve anything like that but is pure chance (so it can’t involve “other people’s maps” or ‘adverse selection’).
I think you missed the point of the Laffy Taffy example. He got the flavor he didn’t like because he’d been systematically eating the ones he did like while leaving the flavor he didn’t like in the bowl. (Or his friend wasn’t actually picking at random.)
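To make the mechanism concrete, here is a minimal sketch of that dynamic. The flavor names, counts, and numbers are all made up for illustration (they are not from the original example); the point is only that selectively eating the flavors you like leaves the bowl dominated by the flavor you don’t, so a later “random” grab is likely to return it.

```python
import random

# Hypothetical illustration: a bowl starts with equal counts of four flavors.
# Someone repeatedly takes a flavor they like whenever any is left.
# The leftover bowl ends up mostly the disliked flavor, so even a genuinely
# random draw from what remains will probably hit it.

FLAVORS = ["cherry", "grape", "apple", "banana"]  # "banana" stands in for the disliked flavor
DISLIKED = "banana"

def simulate(initial_per_flavor=10, picks=25, seed=0):
    rng = random.Random(seed)
    bowl = [f for f in FLAVORS for _ in range(initial_per_flavor)]
    for _ in range(picks):
        liked = [f for f in bowl if f != DISLIKED]
        if not liked:
            break
        bowl.remove(rng.choice(liked))  # eat only flavors you like
    return {f: bowl.count(f) for f in FLAVORS}

if __name__ == "__main__":
    print(simulate())
    # With these made-up numbers: 40 candies, 25 selective picks leave
    # 5 liked candies and all 10 disliked ones, so a random grab from the
    # remainder is ~67% likely to be the disliked flavor.
```

Nothing about the exact numbers matters; any selective-removal process concentrates the leftover pool in whatever nobody wanted, which is the sense in which the “random” grab at the end was predictably bad.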