In Excerpts from a larger discussion about simulacra, following Baudrillard, Jessica Taylor and I laid out a model of simulacrum levels with something of a fall-from-grace feel to the story:
First, words were used to maintain shared accounting. We described reality intersubjectively in order to build shared maps, the better to navigate our environment. I say that the food source is over there, so that our band can move towards or away from it when situationally appropriate, or so people can make other inferences based on this knowledge.
The breakdown of naive intersubjectivity—people start taking the shared map as an object to be manipulated, rather than part of their own subjectivity. For instance, I might say there’s a lion somewhere I know there’s food, in order to hoard access to that resource for idiosyncratic advantage. Thus, the map drifts from reality, and we start dissociating from the maps we make.
When maps drift far enough from reality, in some cases people aren’t even parsing the map as though it had a literal, specific, objective meaning that grounds out in some verifiable external test outside of social reality. Instead, the map becomes a sort of command language for coordinating actions and feelings. “There’s food over there” is construed as a bid to move in that direction, and evaluated as such. Any argument for or against the implied call to action is conflated with an argument for or against the proposition literally asserted. This is how arguments become soldiers. Any attempt to simply investigate the literal truth of the proposition is considered at best naive and at worst politically irresponsible.
But since this usage is parasitic on the old map structure that was meant to describe something outside the system of describers, language is still structured in terms of reification and objectivity, so it substantively resembles something with descriptive power, or “aboutness.” For instance, while you cannot acquire a physician’s privileges and social role simply by providing clear evidence of your ability to heal others, those privileges are still justified in terms of pseudo-consequentialist arguments about expertise in healing.

Finally, the pseudostructure itself becomes perceptible as an object that can be manipulated, the pseudocorrespondence breaks down, and all assertions are nothing but moves in an ever-shifting game where you’re trying to think a bit ahead of the others (for positional advantage), but not too far ahead.
There is some merit to this linear treatment, but it obscures an important structural feature: the resemblance between levels 1 and 3, and between levels 2 and 4.
Another way to think about it is that in levels 1 and 3, speech patterns are authentically part of our subjectivity. Just as babies are confused if you show them something that violates their object permanence assumptions, and a good rationalist is more confused by falsehood than by truth, people operating at simulacrum level 3 are confused and disoriented if a load-bearing social identity or relationship is invalidated.
Likewise, levels 2 and 4 are similar in nature—they consist of nothing more than taking levels 1 and 3 respectively as object (i.e. something outside oneself to be manipulated) rather than as subject (part of one’s own native machinery for understanding and navigating one’s world). We might name the levels:
Simulacrum Level 1: Objectivity as Subject (objectivism, or epistemic consciousness)
Simulacrum Level 2: Objectivity as Object (lying)
Simulacrum Level 3: Relating as Subject (power relation, or ritual magic)
Simulacrum Level 4: Relating as Object (chaos magic, hedge magic, postmodernity) [1]
I’m not attached to these names and suspect we need better ones. But in any case this framework should make it clear that there are some domains where what we do with our communicative behavior is naturally “level 3” and not a degraded form of level 1, while in other domains level 3 behavior has to be a degenerate form of level 1.[2]
Much body language, for instance, doesn’t have a plausibly objective interpretation, but is purely relational, even if evolutionary psychology can point to objective qualities we’re sometimes thereby trying to signal. Sometimes we’re just trying to stay in rhythm with each other, or project good vibes.
[1] Some chaos magicians have attempted to use the language of power relation (gods, rituals, etc.) to reconstruct the rational relation between map and territory, e.g. Alan Moore’s Promethea. The postmodern rationalist project, by contrast, involves constructing a model of relational and postrelational perspectives through rational epistemic means.
[2] A prepublication comment by Zack M. Davis that seemed pertinent enough to include:
Maps that reflect the territory are level 1. Coordination games are “pure” level 3 (there’s no “right answer”; we just want to pick strategies that fit together). When there are multiple maps that fit different aspects of the territory (political map vs. geographic map vs. globe, or different definitions of the same word), but we want to all use the SAME map in order to work together, then we have a coordination game on which map to use. To those who don’t believe in non-social reality, attempts to improve maps (Level 1) just look like lobbying for a different coordination equilibrium (Level 4): “God doesn’t exist” isn’t a nonexistence claim about deities; it’s a bid to undermine the monotheism coalition and give their stuff to the atheism coalition.
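To make the coordination-game point concrete, here is a minimal sketch (my illustration, not part of the original comment; the map names and payoffs are invented): a pure coordination game over which map to adopt, where any matching pair of choices is an equilibrium, so the payoffs alone never single out a “right answer.”

```python
# A minimal sketch of a pure coordination game over which "map" to use.
# The map names and payoffs are illustrative assumptions, not anything
# from the original discussion.
from itertools import product

maps = ["political", "geographic", "globe"]

def payoff(a, b):
    """Both players score only when their choices coincide; otherwise nobody scores."""
    return (1, 1) if a == b else (0, 0)

# A profile (a, b) is a Nash equilibrium if neither player can do better
# by unilaterally switching to a different map.
equilibria = [
    (a, b)
    for a, b in product(maps, maps)
    if all(payoff(x, b)[0] <= payoff(a, b)[0] for x in maps)
    and all(payoff(a, y)[1] <= payoff(a, b)[1] for y in maps)
]

print(equilibria)
# [('political', 'political'), ('geographic', 'geographic'), ('globe', 'globe')]
```

Every matching pair is an equilibrium and none is privileged by the game itself, which is what it means for the choice of a shared map to be “pure” level 3 coordination rather than a question with a level 1 answer.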
Frames that describe perception can become tools for controlling perception.
The idea of simulacra has been generative here on LessWrong, used by Elizabeth in her analysis of negative feedback, and by Zvi in his writings on Covid-19. Its use here appears to originate in private conversations between Benjamin Hoffman and Jessica Taylor. The four simulacra levels or stages are a conception of Baudrillard’s, from Simulacra and Simulation. The Wikipedia summary quoted in Hoffman and Taylor’s original blog post has been reworded several times by various authors and commenters.
We should approach this writing with a gut-level understanding of what motivates these authors to so passionately defend “level one” speech, and why they find simulacra to be so threatening as to describe them in Biblical terms. For Hoffman and Taylor, simulacra are “corruption of discourse,” “destroying the language,” “wireheading,” comparable to a speculative bubble, a “fall from grace.” For Elizabeth, they cause “more conflict or less information in the world,” and she “wanted a button [she] could push to make everyone go to level one all the time.” The “game” is an “enemy,” a “climate” that “punishes” honesty. Zvi’s relentlessly cynical attitude about speech he figures as simulacra is well-known to readers of his Covid-19 analysis.
It is transparently obvious to me that there are some problems in which perceptions matter enormously, others in which perceptions matter not at all, and still others in which perceptions and underlying facts are equally important and dependent upon one another. Problems of perception operate in a complex but ultimately lawful manner. It is possible to develop a correct gears-level model of perceptions and use it to predict and control them to advantage. As Elizabeth points out, however, this perpetuates the necessity of doing so. It may be more advantageous to build relationships and institutions that at least resemble the sort we would build if we placed a great deal of value on preserving our capacity for honesty.
Many people claim to have a bullshit detector, but nobody has one that I can borrow. Most analysis of simulacra is a description of untruths. While this can be important, it mostly motivates me to seek predictive models, identify honest and reliable experts, make factual and logical arguments, and advocate for perceptions that allow us to focus on solving practical problems. It’s never wise to overanalyze nonsense, and I hope that the rationalist community can continue to focus less on the thousands of things that are not and should not be, and more on what should be, what can be, and what is.
I’m not actually sure what it is you’re prescribing here. Which things seem like nonsense to you? Which things did you mean by “overanalyzing nonsense,” and which things would you mean by “focus on what should be / can be / what is”?
Simulacra levels 2-4, and especially 3-4, are ways of characterizing speech as not meaning what it literally appears to mean. Analysis of simulacra often seems to spend a lot of time trying to highlight these quotes, characterize exactly how they depart from literal truth, and assert an imprecise and unverifiable reason why the speech appears as it does. The level 2+ speech such analysis is criticizing is what I am referring to here as “nonsense,” because that is how it is being taken by the critics. I rarely if ever find that my ability to predict the behavior of the speakers or the institutions they belong to is enhanced by these criticisms, characterizations of simulacra levels, and vague speculations.
Analyzing nonsense can be a way to motivate discussion of what does make sense. “Here’s the nonsense the ‘experts’ or ‘leaders’ are saying, here’s a quick explanation of why it’s nonsense, and now here’s a more sensible interpretation of what’s going on.”
But going deep into characterizing just why this particular form of nonsense is the way it is, and what type of nonsense it is, rapidly becomes unconvincing to me.
Better might be “only a fool analyzes blitz,” referring to chess games played out in just a few minutes. I think that a combination of pressure, time constraints, and ego leads to people saying things that seem to them intuitively like the most sensible thing to say at that time. Just as computer analysis of a blitz chess move that seemed sensible often reveals a fatal flaw, so analyzing blitz speech reveals all sorts of foolishness. Simulacra theory invites us to think really hard about why certain bad chess moves seemed superficially compelling to the player of a blitz game. I don’t think there’s much we can learn from that, though.
By contrast, attempts to describe how we can better measure, predict, and control the physical world in morally good ways, including the social world, seem fruitful.
Hmm. So on one hand, I think it’s reasonable to argue that all the Simulacra stuff hasn’t made much of a legible case for it actually being a model with explanatory power.
But, to say “there’s nothing to explain” or “it’s not worth trying” seems pretty wrong. If we’re reliably running into particular kinds of nonsense (and we seem to be), knowing what’s generating the nonsense seems important both for predicting/navigating the world, and for helping us not fall prey to it. (Maybe your point there is that “steering towards goodness” is better than “steering away from badness”, which seems plausibly true, but a) I think we need at least some model of badness, b) there are places where, say, Simulacrum Level 3 might actually be an important coordination strategy)
I haven’t seen these analyses of definitions and causes done with rigor. It also seems very hard to achieve rigor in these analyses, given that the information about individual psychology and the sociology of specific institutions that we’d need to do so successfully is hard to come by.
As such, the tack these authors take is often not to attempt such a rigorous analysis, but instead to go straight from their current model, composed of guesswork, to activist claims about how to improve the world, and to the level of destruction caused by that guesswork-based model.
The analysis, then, seems to be of a guesswork-based, ill-defined model with limited predictive power or falsifiability, involving a lot of arguing with organizations and people you perceive as propagandists for empirically and morally wrong views. It also seems to involve a tendency to discourage behaviors that could disconfirm the assumptions.
But I don’t want to tear into it too deeply. I recognize that simulacra levels point at something real. I also think that doing this too much would be hypocritical.
If I saw more attempts to falsify the model or use it to make predictions, I’d be happier with it.
This post went in a similar direction as Daniel Kokotajlo’s 2x2 Simulacrum grid. It seems to have a “medium amount of embedded worldmodel,” contrasted with some of Zvi’s later simulacra writing (which I think bundles a bunch of Moral-Maze-ish considerations into Simulacrum 4) and Daniel’s grid version (which is basically unopinionated about where the levels came from).
I like that this post notes the distinction between domains where Simulacrum 3 is a degenerate form of level 1, vs domains where Simulacrum 3 is the “natural” form of expression.