[Question] What confusions do people have about simulacrum levels?
I’ve noticed comments to the effect of “simulacrum levels seem very confusing”. To me, simulacrum levels seem fairly obvious-in-retrospect and self-explanatory, based on a handful of explanations and examples from Benquo’s and Zvi’s posts. I’m not sure whether I’m missing something (in which case I should figure out what), or whether I have some pre-existing frame which makes it all more natural (in which case I should figure out what that frame is and try to communicate it), or whether this is just a matter of happening to read the right posts in the right order. So… what are some things people find confusing about simulacrum levels?
Hmm, it’s not that I find them confusing, and I even managed to explain them to someone who didn’t know about them. I think it just feels… too high-complexity or something. Like there’s a simpler version just around the corner. Maybe I’d benefit from 3 positive and 3 negative real-life examples of each level.
Maybe something closer to:
1. causal reality
2. social reality
3. being causal about social reality
4. being social about being causal about social reality
Not there yet, but closer for me.
I had that experience at first. A bit like some chemistry professor just told me “the stages of boiling” as if we can objectively divide things into stages from almost-boiling water to a heavy boil, and I’m like, this level of descriptive accuracy sounds like it belongs in a cooking class rather than chemistry.
I also had the experience of explaining the levels to a friend. I got a “why is this important” type reaction. When I gave examples of mistakes people could make if they interpreted things at the wrong level (EG interpreting “guns don’t kill people, people kill people” as a factual claim rather than as signalling political allegiance), my friend said something along the lines of “that’s just a dumb mistake, I don’t need the levels to understand that”.
Right...! I think that’s a very good point on the usefulness of the model. Like, I find it interesting and useful as a model, but I don’t think I’ve ever applied it in practice.
This isn’t exactly a confusion about the model itself, but this seems like the right place to ask this question:
What areas of the world are people able to predict better once they’ve internalized the “simulacrum levels” model? Like, if I go through all the effort of learning which statements and behaviors are “level 1” or “level 3” and what principles go into those distinctions and how the levels relate to each other, then in what way will I be better able to navigate the world?
I ask because this is a very esoteric theory which I only partially understand after ~a couple of hours of serious effort, and some people clearly think there’s a big payoff for really internalizing it. However, so far the justification I’ve seen people give for the payoff has always been in terms of subjective insight and the feeling of understanding, not in terms of improved ability to navigate social situations, predict the trajectories of groups, avoid dangerous people, or perform any similar feats which I might expect of a person who had a true theory in this area.
In other words, what’s the argument that these beliefs pay rent?
I endorse this use of the question feature. Probing for confusions specifically on a topic is a good idea because:
- The question format seems like it would lower the threshold for articulating them
- It collects a bunch of different confusions in the same place, helping to get a well-rounded update and thus increasing the quality ceiling of each iteration of refinement
Especially since johnswentworth wasn’t one of the driving authors of the idea, I put this into the mental bucket of “good stewardship of the pipeline,” alongside tasks like recording or summarizing off-LessWrong conversations/interviews.
(Posting this in a spirit of self-congratulation: I wrote up a spiel about what I found confusing, and then realised that I’m confused on a much more fundamental level about the nature of the various explanations and how they relate to each other, and am now going back to reread the various sources rather than writing something unhelpfully confusing about a confused confusion.)
smug smaug :)
I’ve forged my own understanding of the levels, picking and choosing the parts that made sense to me from various posts. But there was definitely a lot of picking and choosing—parts that didn’t make sense to me, I’ve simply discarded. So there are certainly versions I don’t understand.
I’m planning to take a look back at some articles to indicate places of dissonance, but off the top of my head, I think the point of highest friction is the idea that each stage follows from the last in a systematic way. This story seems to almost work rather than actually working.
OK, looking back a bit, I think the main point of discordance for me is that the 3rd level “masks the absence of meaning”. This was repeated often in simulacrum posts.
I can understand this “masks the absence of meaning” as a thing, but to me it seems more sensible to think of level 3 as “signalling”. I like the interpretation where levels 1 and 3 are both “honest in their own way” (level 3 is like “vibing”: honestly expressing what is felt in the moment, just devoid of a concept of truth like the one at level 1). This seems incompatible with “masks the absence of meaning”.
A “show trial” was given as an example of the mask-absence-of-meaning level 3; this makes some sense, as it hides the absence of rule of law in which statements of guilt would be meaningful (as opposed to simply lying about guilt). But it makes less sense as “honest signalling” to indicate group affiliation with those who prefer rule-of-law-flavored vibes.
Quoting the original wikipedia summary which seems to have sparked much of the discussion:
For me, I somewhat buy a natural progression between the levels in this model:
Level 1: truth.
Level 2: masks the absence of level 1; IE, lying.
Level 3: masks the absence of even level 2; IE, masks the absence of meaning.
However, level 4 feels less like a natural next step and more like a summary of all the rest of the infinite levels of such a hierarchy—as if to say “and so on”. The implicit claim is that anything worse than level 3 is so bad as to be not worth classifying in further detail.
In my preferred interpretation, we instead think in this way:
Level 1: truth.
Level 2: Malign subversion of the level-1 system; IE, lying.
Level 3: The behavior at level 2 corrupts the meaning of the symbols at level 1 (basically honest people are communicating, but using a language built together with liars). What survives is a kind of looser meaning system. Meaning becomes “whatever you can infer”; words therefore have a tendency to say more about group affiliation than about reality.
Level 4: Malign subversion of the level-3 system.
This leads to some slippage between 3 and 4 for me. If I go with the original description of the levels (in which 3 masks lack of meaning, and 4 indicates collapse of meaning, where symbols refer only to symbols), it seems like signalling should be level 4, not level 3.
I agree that “each stage follows in a systematic way” doesn’t quite work, and to further illuminate that I’d like to describe the specific systematic progression that I personally inferred before deciding that it doesn’t seem to match how the levels are actually being used in discussion:
(Since I don’t think this matches current usage, I’m going to deliberately change terminology and say “steps” instead of “levels” in a weak attempt to prevent conflation.)
A. To ascend from an odd step to an even step, the speaker’s motive changes, but their communicative intent remains the same.
B. To ascend from an even step to an odd step, the speaker’s motive remains the same, but their intent is now to communicate that motive.
At step 1, when I say
“There’s a tiger across the river”
I want you to believe
There is a tiger across the river
because
There IS a tiger across the river (or so I think)
At step 2, when I say
“There’s a tiger across the river”
I want you to believe
There is a tiger across the river
because
I don’t want anyone to cross the river
At step 3, when I say
“There’s a tiger across the river”
I want you to believe
I don’t want anyone to cross the river
because
I don’t want anyone to cross the river
At step 4, when I say
“There’s a tiger across the river”
I want you to believe
I don’t want anyone to cross the river
because
I want to ally myself with the vermilion political party
At step 5, when I say
“There’s a tiger across the river”
I want you to believe
I want to ally myself with the vermilion political party
because
I want to ally myself with the vermilion political party
At step 6, when I say
“There’s a tiger across the river”
I want you to believe
I want to ally myself with the vermilion political party
because
I want vermilion party votes to help me become mayor
At step 7, when I say
“There’s a tiger across the river”
I want you to believe
I want vermilion party votes to help me become mayor
because
I want vermilion party votes to help me become mayor
At step 8, when I say
“There’s a tiger across the river”
I want you to believe
I want vermilion party votes to help me become mayor
because
I’m trying to split the vermilion party’s vote so their other candidate doesn’t win
etc.
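(Tangentially: here is the A/B pattern above written out as a tiny Python sketch. The formalization and the `step` function are mine, not something from the simulacra posts, but all the strings come from the running example.)

```python
# motives[0] is the object-level claim; motives[k] (k >= 1) is the motive
# introduced at even step 2k. Strings are from the tiger example above.
motives = [
    "There is a tiger across the river",
    "I don't want anyone to cross the river",
    "I want to ally myself with the vermilion political party",
    "I want vermilion party votes to help me become mayor",
]

def step(n):
    """Return (what I want you to believe, why I want you to believe it) at step n >= 1."""
    k = n // 2  # index of the most recently introduced motive
    if n % 2 == 1:
        # Odd step (rule B): my intent is now to communicate my motive itself.
        return motives[k], motives[k]
    # Even step (rule A): same communicative intent as the previous step, new hidden motive.
    return motives[k - 1], motives[k]

for n in range(1, 8):
    believe, because = step(n)
    print(f'Step {n}: I want you to believe "{believe}" because "{because}"')
```

Appending further motives to `motives` extends the progression indefinitely, two steps per motive.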
I don’t think there’s any strict upper bound to how many steps you can get out of this progression, but the practical depth is limited for the following reason:
Notice that there might be many possible motivations that could be introduced at an even step. In step 2 above, I used “I don’t want anyone to cross the river”, but I could have used “I want to organize a tiger hunting party” or “I want to promote the development of anti-tiger weaponry” or “I want us to acknowledge that our attempt to avoid tigers is failing and we should try to reach an accommodation with them instead”.
A successful step-3 communication can only occur if there is a single step-2 motive that is so common or so obvious (in context) that it can be safely inferred by the listener. (Otherwise, I might want you to understand that I don’t want anyone to cross the river, but you might mistakenly think I want to organize a tiger hunting party.)
Also note that all of the odd steps might be called “honest” in the sense that you want the listener to believe an accurate thing (you are trying to make their map look like your map), but only step 1 is truthful in the sense that it accurately describes object-level reality. All of the even steps are dishonest.
I’m not sure this model is particularly helpful, except that perhaps it illuminates a difference between “honesty” and “truthfulness”.
I think current simulacra discussions are sort-of collapsing all of steps 3+ into “simulacra level 3”, and then “simulacra level 4” is sort-of like step infinity, except I don’t think the relation between simulacra levels and the model I described above is actually that clean. I would welcome further attempts to concisely differentiate them.
Level 3 is identity, masking the absence of justification. Level 4 masks the absence of identity.
Ah, interesting, I like it!
In terms of attempting pseudo-rigor for the pattern of succession of stages, I don’t understand why you throw “justification” in there. I would have said “meaning”, something which still holds strong at level 2 (in order for us to even define lying).
And then we have the next natural questions: what grows to mask the absence of identity? What would level 5 mask?
So level 4 is… intention masking the absence of identity?
Then level 5 is nonsense words, masking the absence of intention.
For levels 6 and higher, please see [Cuil Theory](http://cuiltheory.wikidot.com/what-is-cuil-theory).
I don’t know if this is a confusion, per se, but I dislike that there’s this undertone in the model that everything above the first level is progressively worse. I mean, sure, it’s worse for some purposes, but this quickly fades into the background and becomes a model about what’s worse within a particular worldview and for particular purposes.
Complaints that any particular simulacrum level gets in the way of truth are not very interesting to someone who doesn’t actually care much about truth, and is instead more concerned with, say, social harmony or power. And you might say, well, the posts acknowledge that, but it’s done in a way that makes it very clear there are taken-for-granted reasons why truth is the most important thing, reasons that won’t connect for anyone for whom this isn’t true.
When I look at the theory of simulacrum levels I have a reaction like “yeah, that’s cool, I get what you’re saying, but also you have to communicate at all the levels concurrently all the time anyway, so stop whining about it and get on with living with humans as they are rather than pining for some ideal world that doesn’t exist.” Not the most charitable reaction, and again not exactly a confusion, but I bring this up because I could see this manifesting as confusion for someone who similarly sees this problem but doesn’t quite have the words to put to it.
If I had to guess at why I’ve grasped the concept but not the level-number mapping, I think the “simulacrum level N” schema makes it harder to learn. There’s no intrinsic 2-ness about SL 2 or 4-ness about SL 4, so it’s a memorization game. Not a big game, especially if you actually use the concept handles in conversations, but…
Generally, it’s harder to learn a set of vocab words and phrases if there are pairs which look similar. (I think there’s some psychology research on forgetting that covers this, but I forgot what it’s called.)
Even worse, this pairwise similarity can impede retention in the long-term, in my experience. For example, I am (or at least have been) quite proficient in French, but because my teacher tried to teach all of the days of the week at the same time, they still give me trouble.
EDIT: The way to get around this is by learning each similar concept a week+ apart. I have a special “conflicting concepts” Anki deck for when I have to add cards for similar things.
They’re named after the planets: Sun-day, Moon-day, Mars-day, Mercury-day, Jupiter-day, Venus-day, and Saturn-day.
It’s easy to remember when you realize that the English names are just the equivalent Norse gods: Saturday, Sunday and Monday are obvious. Tyr’s-day (god of combat, like Mars), Odin’s-day (eloquent traveler god, like Mercury), Thor’s-day (god of thunder and lightning, like Jupiter), and Freyja’s-day (goddess of love, like Venus) are how we get the names Tuesday, Wednesday, Thursday, and Friday.
Note also Odin was “Woden” in Old English
In Old English Wednesday was “Wōdnesdæg”, but yeah. Woden is the Anglo-Saxon version of Odin. Also Friday was “Frīġedæġ”. It’s not actually clear from the record if Fríge and Freyja are the same goddess or not, but they’re so similar that it’s a matter of some debate among scholars. The Norse pantheon apparently had no equivalent for Saturn, so Saturday kept the Roman name.
The weekdays were named for the seven “naked-eye” planets known to Hellenistic astrology. (The Sun and Moon counted as planets in that system.) The seven planetary gods were said to watch over the Earth in hourly shifts, in order of decreasing (geocentric) distance: Saturn, Jupiter, Mars, Sun, Venus, Mercury, Moon. Since 24 ≡ 3 (mod 7), after a 24-hour day the cycle has advanced three places, so a different god opened each day of the week, and the day was named for its opening god. Counting by 24s (or 3s) through this distance-ordered cycle gives the familiar order of the days of the week.
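(If you want to sanity-check the modular arithmetic, here’s a quick Python sketch; the planet list is just the distance order described above.)

```python
# Planets in order of decreasing geocentric distance (the Chaldean order).
chaldean = ["Saturn", "Jupiter", "Mars", "Sun", "Venus", "Mercury", "Moon"]

# Each hour is ruled by the next planet in the cycle. A day has 24 hours,
# and 24 % 7 == 3, so each new day opens 3 planets further along.
week = [chaldean[(day * 24) % 7] for day in range(7)]
print(week)
# ['Saturn', 'Sun', 'Moon', 'Mars', 'Mercury', 'Jupiter', 'Venus']
# i.e. Saturday, Sunday, Monday, Tuesday, Wednesday, Thursday, Friday.
```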
I thank you from the bottom of my heart :)
That is so cool, I can’t believe I’d never heard it before!
Names for days of the week are similarly derived from the planets in other languages, often from the local pantheon. Knowing which Chinese element was associated with which planet from watching the Sailor Moon anime was how I learned the weekday names and associated kanji in Japanese.
Interesting. In Mandarin, at least nowadays, they’re just numbered. Week is “星期” or “star period.” Sunday is “星期 day” but the others are just 星期 one, 星期 two, and so on. I wonder when that happened.
In Japanese, the days are written with 日 and 月, and then the five elements corresponding to each planet: 火, 水, 木, 金, and 土, followed by -曜日. It was spelled the same in Classical Chinese (where the Japanese got it from), until 1911. The Chinese had apparently learned of Hellenistic astrology by the 4th century. In Classical Chinese, I suppose e.g. Wednesday would be literally “Water-Luminary day”.
Maybe the reason it’s so hard to remember is that SL N is SL4. ;-)
Zvi attempted to address this in a more recent post, but I still feel like there are two slightly different abstractions getting conflated with simulacrum levels that don’t quite gel together for me. See my comment on this post:
https://www.lesswrong.com/posts/yqEh9ewKwzig4kzyx/what-is-meant-by-simulcra-levels?commentId=62QE96aqwyLq98mM2
Your linked comment was very useful. To those who didn’t click, here’s a relevant snippet:
(Your link is wrong)
I think that the main thing that confuses me is the nuance of SL4, and I also think that’s the main place where the rationalist community’s understanding/use of simulacra levels breaks down on the abstract level.
One of the original posts bringing simulacra to LessWrong explicitly described the effort to disentangle simulacra from Marxist European philosophers. I think that this was entirely successful, and intuitive, for the first 3 levels, but the fourth simulacra level is significantly more challenging to disentangle from the ideological theses advanced by said philosophers, and I’m not sure that I’ve seen a non-object-level description that doesn’t use highly loaded phrases (symbol, signifier) that come with nuanced and essential connotations from Baudrillard and others. I worry that this leads to the inaccurate analogy 1:2 = 3:4, and the loss of a legitimately helpful concept.
I define SL4 in terms of a description I heard once of a summary of Baudrillard’s work: a simulacrum is when a simulation breaks off and becomes its own thing, but still connected to the original. And whether or not that’s how Baudrillard thought of SL4, it’s a useful concept on its own. (My simulacrum of “simulacrum” as it were.)
For example, a smartphone is a miniature computer and video game console that also has telephone capabilities; it’s a simulacrum of Bell’s talk-over-telegraph-wires device.
The iPod Video is an almost identical piece of hardware and software minus the telephony, and even that can be simulated with the right VOIP app. I can imagine someone saying, “Well, it’s still essentially a smartphone.” But we don’t say the same of a laptop computer using a VOIP app, or even a jailbroken Nintendo Switch or DSi. We’ve reached the edge of the simulacrum.
I’m confused about the simulacrum level of texts that don’t pretend that you should take them literally.
For example: novels, articles in *The Onion*, ironic remarks, obvious exaggerations, metaphors, or jokes.
Each of these can be used to influence our view of the world, so just giving up and saying that the concept of simulacrum levels does not apply in these cases is unsatisfactory.
My first understanding of simulacrum levels was that these types of texts could never be level 1 simulacra, because they do not give an accurate map of the real world, and that they would typically be at least level 3. On the other hand, levels 3 and 4 do not really fit, because such texts often do try to influence our view of the world rather than our view of the sender. George Orwell pointed at something in the real world in *1984*, and it wasn’t important who wrote that novel, so *1984* can be understood as a level 1 text.
My current understanding (after having only understood simulacrum levels for a day) is that even at level 1, you don’t have to understand the text literally. Instead, to find the simulacrum level of a text, you have to first translate the text into a message, and then analyze why the sender wants to send that message.
I could use more clarity on what is and isn’t level three.
Supposedly at level three, saying “There’s a lion across the river” means “I’m with the popular kids who are too cool to go across the river.” But there’s more than one kind of motivation the speaker might have.
A) A felt sense that “There’s a lion across the river” would be a good thing to say (based on subconscious desire to affiliate with the cool kids, and having heard the cool kids say this)
B) A conscious calculation that saying this will ingratiate you with the cool kids, based on explicit reasoning about other things the cool kids have said, but motivated by a felt sense that those kids are cool and you want to join them
C) A conscious calculation that saying this will ingratiate you with the cool kids, motivated by a conscious calculation that gaining status among the cool kids will yield tangible benefits.
Are all three of these contained by level three? Or does an element of conscious calculation take us into level four?
(I think C) has a tendency to turn into B) and B) likewise into A), but I don’t think it’s inevitable)