Initial reaction: I like this post a lot. It’s short, to the point. It has examples relating its concept to several different areas of life: relationships, business, politics, fashion. It demonstrates a fucky dynamic that in hindsight obviously game-theoretically exists, and gives me an “oh shit” reaction.
Meditating a bit on an itch I had: what this post doesn’t tell me is how common this dynamic is, or how to detect when it’s happening.
While writing this review: hm, is this dynamic meaningfully different from the idea of a costly signal?
Thinking about the examples:
You are married, and want to take your spouse out to a romantic dinner. You could choose a place you both love, or a place that only they love. You choose the place you don’t love, so they will know how much you love them. After all, you didn’t come here for the food. … you care more about your spouse’s view of how much you care about their experience than you care about your own experience.
Seems believably common, but also kind of like a fairly normal not-very-fucky costly signal.
(And, your spouse probably wants to come to both restaurants sometimes. And they probably come to the place-only-they-love less than they ideally would, because you don’t love it.)
Gets fuckier if they’d enjoy the place-you-both-love more than the place-only-they-love. Then the cost isn’t just your own enjoyment; it’s a signal you care about something, at the cost of the thing you supposedly care about. Still believable, but feels less common.
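(A minimal sketch of the distinction I’m gesturing at, with made-up utility numbers. The names and values here are my own illustration, not anything from the post: in a plain costly signal only my own enjoyment takes the hit, whereas in the fuckier version my spouse’s enjoyment takes a hit too.)

```python
# Toy payoffs (made-up numbers) for the restaurant example above.
my_enjoyment    = {"both_love": 8, "only_they_love": 2}
their_enjoyment = {"both_love": 9, "only_they_love": 6}  # fuckier case: they'd enjoy the shared place more

for choice in ("both_love", "only_they_love"):
    # Cost = enjoyment forgone relative to the place you both love.
    my_cost    = my_enjoyment["both_love"] - my_enjoyment[choice]
    their_cost = their_enjoyment["both_love"] - their_enjoyment[choice]
    print(f"{choice}: my sacrifice = {my_cost}, "
          f"cost to the person I supposedly care about = {their_cost}")

# A plain costly signal only has my_cost > 0.
# The fuckier version also has their_cost > 0: the signal is credible
# partly *because* it damages the thing it's supposedly about.
```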
A middle manager must choose how to improve widget production. He can choose a policy that improperly maintains the factory and likely eventually poisons the water supply, or a policy that would prevent that at no additional cost. He knows that when he is up for promotion, management will want to know the higher ups can count on him to make the quarterly numbers look good and not concern himself with long term issues or what consequences might fall on others. If he cared about not poisoning the water supply, he would not be a reliable political ally. Thus, he chooses the neglectful policy. … you care more about being seen as focused on your own success than you care about outcomes you won’t be responsible for.
(Didn’t work for me, but I think the link is supposed to highlight the following phrase in the second paragraph: “The Immoral Mazes sequence is an exploration of what causes that hell, and how and why it has spread so widely in our society. Its thesis is that this is the result of a vicious cycle arising from competitive pressures among those competing for their own organizational advancement.”)
Ah, so I think this has an important difference from a normal “costly signal”, in that the cost is to other people.
I read the Immoral Mazes sequence at the time, but don’t remember it in depth. I could believe that it reliably attests that this sort of thing happens a lot.
A politician can choose between two messages that affirm their loyalty [to their biggest campaign contributor]: Advocating a beneficial [to the general public] policy, or advocating a useless and wasteful policy. They choose useless, because the motive behind advocating a beneficial policy is ambiguous. Maybe they wanted people to benefit! … you care more about being seen as loyal than about improving the world by being helpful.
Again, the cost is to other people, not the politician.
How often do politicians get choices like this? I think a more concrete example would be helpful for me here, even if it were fictional. (But non-fiction would be better, and if it’s difficult to find one… that doesn’t mean it doesn’t happen; real-world complications might mean we don’t know about such cases and/or can’t be sure that’s what’s going on with them. But still, if it’s difficult to find a real-world example of this, that says something bad.)
(Is the point of the link to the Pledge of Allegiance simply that that’s a signal of loyalty that costs a bit of time? I’m not American and didn’t read the article in depth, so I could be missing something.)
A start-up founder can choose between building a quality product without technical debt and creating a hockey stick graph with it, or building a superficially similar low-quality product with technical debt and using that. Both are equally likely to create the necessary graph, and both take about the same amount of effort, time and money. They choose the low-quality product, so the venture capitalists can appreciate their devotion to creating a hockey stick graph. … you care about those making decisions over your fate believing that you will focus on the things they believe the next person deciding your fate will care about, so they can turn a profit. They don’t want you distracted by things like product quality.
The claimed effect here is: the more investors know about your code quality, the more incentive you have to write bad code. I could tell an opposite story, where if they know you have bad code they expect that to hurt your odds of maintaining a hockey-stick graph.
So this example only seems to hold up if:
Investors know about your code quality.
They care about what your code quality says about loyalty to their interests, more than what it says about your ability to satisfy their interests.
I’m not convinced on either point.
If it does hold up… yeah, it’s fucky in that it’s another case of “signal you care about a thing by damaging the thing”.
You can choose between making a gift and buying a gift. You choose to make a gift, because you are rich and buying something from a store would be meaningless. Or you are poor, so you buy something from a store, because a handmade gift wouldn’t show you care.
Mostly just seems like another fairly straightforward costly signal? Not very fucky.
Old joke: One Russian oligarch says, “Look at my scarf! I bought it for ten thousand rubles.” The other says, “That’s nothing, I bought the same scarf for twenty thousand rubles.” … the oligarchs want to show they have money to burn, and that they care a lot about showing they have lots of money to burn. That they actively want to Get Got to show they don’t care. If someone thought the scarf was bought for mundane utility, that wouldn’t do at all.
Same as above. I think this is what economists call a Veblen good.
I don’t think the post convinces me to be significantly more concerned about regular costly signals than I currently am, where the cost falls entirely on the person sending the signal. That’s two and a half of the examples, and they seem like the least-fucky. I think… if I’m supposed to care about those, I’d probably want an article that specifically focuses on them, rather than mixing them with fuckier things.
The ones where (some of) the cost is to other people, or to the thing-supposedly-cared-about, are more worrying. But there’s also not enough detail here to convince me those are very common. And the example closest to my area of expertise is the one I don’t believe. I feel like a lot of my worry about these, in my initial reading, came from association with the others, which, having separated them out, I find more-believable and less-fucky. I don’t think it’s deliberate, but I feel vaguely Motte-and-Baileyed.
Writing this has moved me from positive on the review to slightly negative. (Strictly speaking I didn’t cast a vote before, but I was expecting it would have been positive.) But I won’t be shocked if something I’ve missed brings me back to positive.