Simulacrum 3 As Stag-Hunt Strategy
Reminder of the rules of Stag Hunt:
- Each player chooses to hunt either Rabbit or Stag.
- Players who choose Rabbit receive a small reward, regardless of what everyone else chooses.
- Players who choose Stag receive a large reward if-and-only-if everyone else also chooses Stag. If even a single player chooses Rabbit, then all the Stag-hunters receive zero reward.
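The rules above can be written down directly. A minimal sketch (the specific reward values are illustrative assumptions, not from the post):

```python
def stag_hunt_payoffs(choices, rabbit_reward=1, stag_reward=5):
    """Return each player's payoff, given a list of 'stag'/'rabbit' choices.

    Rabbit-hunters always get the small reward; stag-hunters get the
    large reward only if *everyone* chose stag, and zero otherwise.
    """
    everyone_hunted_stag = all(c == "stag" for c in choices)
    return [
        rabbit_reward if c == "rabbit"
        else (stag_reward if everyone_hunted_stag else 0)
        for c in choices
    ]

print(stag_hunt_payoffs(["stag", "stag", "stag"]))    # [5, 5, 5]
print(stag_hunt_payoffs(["stag", "rabbit", "stag"]))  # [0, 1, 0]
```

Note the fragility that drives the whole post: a single Rabbit-hunter zeroes out every Stag-hunter's payoff, which is why Rabbit is the safe Schelling choice.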
From the outside, the obvious choice is for everyone to hunt Stag. But in real-world situations, there’s lots of noise and uncertainty, and not everyone sees the game the same way, so the Schelling choice is Rabbit.
How does one make a Stag hunt happen, rather than a Rabbit hunt, even though the Schelling choice is Rabbit?
If one were utterly unscrupulous, one strategy would be to try to trick everyone into thinking that Stag is the obvious right choice, regardless of what everyone else is doing.
Now, tricking people is usually a risky strategy at best—it’s not something we can expect to work reliably, especially if we need to trick everyone. But this is an unusual case: we’re tricking people in a way which (we expect) will benefit them. Therefore, they have an incentive to play along.
So: we make our case for Stag, try to convince people it’s the obviously-correct choice no matter what. And… they’re not fooled. But they all pretend to be fooled. And they all look around at each other, see everyone else also pretending to be fooled, and deduce that everyone else will therefore choose Stag. And if everyone else is choosing Stag… well then, Stag actually is the obvious choice. Just like that, Stag becomes the new Schelling point.
We can even take it a step further.
If nobody actually needs to be convinced that Stag is the best choice regardless, then we don’t actually need to try to trick them. We can just pretend to try to trick them. Pretend to pretend that Stag is the best choice regardless. That will give everyone else the opportunity to pretend to be fooled by this utterly transparent ploy, and once again we’re off to hunt Stag.
This is simulacrum 3: we’re not telling the truth about reality (simulacrum 1), or pretending that reality is some other way in order to manipulate people (simulacrum 2). We’re pretending to pretend that reality is some other way, so that everyone else can play along.
In The Wild
We have a model for how-to-win-at-Stag-Hunt. If it actually works, we’d expect to find it in the wild in places where economic selection pressure favors groups which can hunt Stag. More precisely: we want to look for places where the payout increases faster-than-linearly with the number of people buying in. Economics jargon: we’re looking for increasing marginal returns.
Telecoms, for instance, are a textbook example. One telecom network connecting fifty cities is far more valuable than fifty networks which each only work within one city. In terms of marginal returns: the fifty-first city connected to a network contributes more value than the first, since anyone in the first fifty cities can reach a person in the fifty-first. The bigger the network, the more valuable it is to expand it.
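The increasing-marginal-returns claim can be made concrete with a toy model. Suppose (purely for illustration; this model is an assumption, not from the post) that a network's value is proportional to the number of city pairs that can reach each other:

```python
def network_value(n):
    # Toy model: value proportional to the number of connected city pairs.
    return n * (n - 1) // 2

# Marginal value of the n-th city, for n = 1..51.
marginal = [network_value(n) - network_value(n - 1) for n in range(1, 52)]
print(marginal[1])   # value added by the 2nd city
print(marginal[50])  # value added by the 51st city: much larger
```

Under this model the fifty-first city adds fifty times the value of the second, exactly the "bigger network, more valuable to expand" dynamic described above.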
From an investor’s standpoint, this means that a telecom investment is likely to have better returns if more people invest in it. It’s like a Stag Hunt for investors: each investor wants to invest if-and-only-if enough other investors also invest. (Though note that it’s more robust than a true Stag Hunt—we don’t need literally every investor to invest in order to get a big payoff.)
Which brings us to this graph, from T-Mobile's 2016 annual report (second page):

Fun fact: that is not a graph of those numbers. Some clever person took the numbers, and stuck them as labels on a completely unrelated graph. Those numbers are actually near-perfectly linear, with a tiny amount of downward curvature.
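One quick way to check a claim like this is to look at second differences: a linear series has second differences near zero, while genuinely accelerating growth has positive second differences. A sketch with made-up numbers (not T-Mobile's actual figures):

```python
def second_differences(series):
    """Differences of the differences: zero for a linear series."""
    first = [b - a for a, b in zip(series, series[1:])]
    return [b - a for a, b in zip(first, first[1:])]

linear = [10, 20, 30, 40, 50]        # constant growth
accelerating = [10, 20, 35, 55, 80]  # growth itself growing

print(second_differences(linear))        # all zeros -> straight line
print(second_differences(accelerating))  # positive -> upward curvature
```

Applied to the numbers labeled on the report's graph, this test would show near-zero (slightly negative) second differences, not the upward curve the graph's shape implies.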
Who is this supposed to fool, and to what end?
This certainly shouldn’t fool any serious investment analyst. They’ll all have their own spreadsheets and graphs forecasting T-Mobile’s growth. Unless T-Mobile’s management deeply and fundamentally disbelieves the efficient markets hypothesis, this isn’t going to inflate the stock price.
It could just be that T-Mobile’s management were themselves morons, or had probably-unrealistic models of just how moronic their investors were. Still, I’d expect competition (both market pressure and investor pressure in shareholder/board meetings) to weed out that level of stupidity.
My current best guess is that this graph is not intended to actually fool anyone—at least not anyone who cares enough to pay attention. This graph is simulacrum 3 behavior: it’s pretending to pretend that growth is accelerating. Individual investors play along, pretending to be fooled so that all the other investors will see them pretending to be fooled. The end result is that everyone hunts the Stag: the investors invest in T-Mobile, T-Mobile uses the investments to continue expansion, and increasing investment yields increasing returns because that’s how telecom networks work.
… Well, that’s almost my model. It still needs one final tweak.
I’ve worded all this as though T-mobile’s managers and investors are actually thinking through this confusing recursive rabbit-hole of a strategy. But that’s not how economics usually works. This whole strategy works just as well when people accidentally stumble into it. Managers see that companies grow when they “try to look good for investors”. Managers don’t need to have a whole gears-level model of how or why that works, they just see that it works and then do more of it. Likewise with the investors, though to a more limited extent.
And thus, a maze is born: there are real economic incentives for managers to pretend (or at least pretend to pretend) that the company is profitable and growing and all-around deserving of sunglasses emoji, regardless of what’s actually going on. In industries where this sort of behavior results in actually-higher profits/growth/etc, economic pressure will select for managers who play the game, whether intentionally or not. In particular, we should expect this sort of thing in industries with increasing marginal returns on investment.
Takeaway
Reality is that which remains even if you don’t believe it. Simulacrum 3 is that which remains only if enough people pretend (or at least pretend to pretend) to believe it. Sometimes that’s enough to create real value—in particular by solving Stag Hunt problems. In such situations, economic pressure will select for groups which naturally engage in simulacrum-3-style behavior.
This gave a satisfying “click” of how the Simulacra and Staghunt concepts fit together.
Things I would consider changing:
1. Lion Parable. In the comments, John expands on this post with a parable about lion-hunters who believe in “magical protection against lions.” That parable is actually what I normally think of when I think of this post, and I was sad to learn it wasn’t actually in the post. I’d add it in, maybe as the opening example.
2. Do we actually need the word “simulacrum 3”? Something on my mind since last year’s review is: how much work is the word “simulacra” doing for us? I feel vaguely like I learned something from Simulacra Levels and their Interactions, but the concept still feels overly complicated as a dependency for explaining new concepts. If I read this post in the wild without having spent a while grokking Simulacra, I think I’d find it pretty confusing.
But, meanwhile, the original sequences talked about “belief in belief”. I think that’s still a required dependency here, but a) Belief in Belief is a shorter post, and b) I think this post plus the literal words “belief in belief” helps grok the concept in the first place.
On the flipside, I think the Simulacra concept does help point towards an overall worldview about what’s going on in society, in a gnarlier way than belief-in-belief communicates. I’m confused here.
Important Context
A background thing in my mind whenever I read one of these coordination posts is an older John post: From Personal to Prison Gangs. We’ve got Belief-in-Belief/Simulacra3 as Stag Hunt strategies. Cool. They still involve… like, falsehoods and confusion and self-deception. Surely we shouldn’t have to rely on that?
My hope is yes, someday. But I don’t know how to reliably do it at scale yet. I want to just quote the end of the prison gangs piece:
Most of the writing on simulacrum levels has left me feeling less able to reason about them, as if they were too evil to contemplate. This post engaged with them as one fact in the world among many, which was already an improvement. I’ve found myself referring to this idea several times over the last two years, and it left me more alert to looking for other explanations in this class.
A great explanation of something I’ve felt, but not been able to articulate. Connecting the ideas of Stag Hunts, coordination problems, and simulacrum levels is a great insight that has paid dividends as an explanatory tool.