Frames that describe perception can become tools for controlling perception.
The idea of simulacra has been generative here on LessWrong, used by Elizabeth in her analysis of negative feedback and by Zvi in his writings on Covid-19. The framework appears to originate in private conversations between Benjamin Hoffman and Jessica Taylor. The four simulacra levels, or stages, are Baudrillard’s conception, from Simulacra and Simulation. The Wikipedia summary quoted in the original blog-post dialogue between Hoffman and Taylor has since been reworded several times by various authors and commenters.
We should approach this writing with a gut-level understanding of what motivates these authors to defend “level one” speech so passionately, and why they find simulacra threatening enough to describe them in Biblical terms. For Hoffman and Taylor, simulacra are “corruption of discourse,” “destroying the language,” “wireheading,” comparable to a speculative bubble, a “fall from grace.” For Elizabeth, they cause “more conflict or less information in the world,” and she “wanted a button [she] could push to make everyone go to level one all the time.” The “game” is an “enemy,” a “climate” that “punishes” honesty. Zvi’s relentlessly cynical attitude toward speech he classifies as simulacra is well known to readers of his Covid-19 analysis.
It is transparently obvious to me that there are some problems in which perceptions matter enormously, others in which perceptions matter not at all, and still others in which perceptions and underlying facts matter equally and depend on one another. Problems of perception operate in a complex but ultimately lawful manner. It is possible to develop a correct gears-level model of perceptions and use it to predict and control them to advantage. As Elizabeth points out, however, doing so perpetuates the necessity of doing so. It may be more advantageous to build relationships and institutions that at least resemble the ones we would build if we placed a great deal of value on preserving our capacity for honesty.
Many people claim to have a bullshit detector, but nobody has one that I can borrow. Most analysis of simulacra is a description of untruths. While this can be important, it mostly motivates me to seek predictive models, identify honest and reliable experts, make factual and logical arguments, and advocate for perceptions that allow us to focus on solving practical problems. It’s never wise to overanalyze nonsense, and I hope that the rationalist community can continue to focus less on the thousands of things that are not and should not be, and more on what should be, what can be, and what is.
It’s never wise to overanalyze nonsense, and I hope that the rationalist community can continue to focus less on the thousands of things that are not and should not be, and more on what should be, what can be, and what is.
I’m not actually sure what it is you’re prescribing here. Which things seem like nonsense to you? What did you mean by “overanalyzing nonsense,” and what would you mean by “focus on what should be / can be / what is”?
Simulacra levels 2-4, and especially 3-4, are ways of characterizing speech as not meaning what it literally appears to mean. Analysis of simulacra often seems to spend a lot of time highlighting quotes of this kind, characterizing exactly how they depart from literal truth, and asserting an imprecise and unverifiable reason why the speech appears as it does. The level 2+ speech such analysis criticizes is what I am referring to here as “nonsense,” because that is how the critics are treating it. I rarely if ever find that my ability to predict the behavior of the speakers, or of the institutions they belong to, is enhanced by these criticisms, characterizations of simulacra levels, and vague speculations.
Analyzing nonsense can be a way to motivate discussion of what does make sense. “Here’s the nonsense the ‘experts’ or ‘leaders’ are saying, here’s a quick explanation of why it’s nonsense, and now here’s a more sensible interpretation of what’s going on.”
But going deep into characterizing just why this particular form of nonsense is the way it is, and what type of nonsense it is, rapidly becomes unconvincing to me.
Better might be “only a fool analyzes blitz,” referring to chess games played out in just a few minutes. I think that a combination of pressure, time constraints, and ego leads people to say things that seem to them, intuitively, like the most sensible thing to say at that moment. Just as analyzing a blitz chess move that seemed sensible with a computer often reveals a fatal flaw, so analyzing blitz speech reveals all sorts of foolishness. The simulacra theory invites us to think really hard about why certain bad chess moves seemed superficially compelling to the player of a blitz game. I don’t think there’s much we can learn from that, though.
By contrast, attempts to describe how we can better measure, predict, and control the physical world in morally good ways, including the social world, seem fruitful.
Hmm. So on one hand, I think it’s reasonable to argue that all the Simulacra stuff hasn’t made much of a legible case for itself as a model with explanatory power.
But, to say “there’s nothing to explain” or “it’s not worth trying” seems pretty wrong. If we’re reliably running into particular kinds of nonsense (and we seem to be), knowing what’s generating the nonsense seems important both for predicting/navigating the world, and for helping us not fall prey to it. (Maybe your point there is that “steering towards goodness” is better than “steering away from badness”, which seems plausibly true, but a) I think we need at least some model of badness, b) there are places where, say, Simulacrum Level 3 might actually be an important coordination strategy)
I haven’t seen these analyses of definitions and causes done with rigor. It also seems very hard to achieve rigor here, given that the information about individual psychology and the sociology of specific institutions that we’d need to do so successfully is hard to come by.
As such, the tack these authors take is often not to attempt such a rigorous analysis, but instead to go straight from their current model, composed of guesswork, to activist claims about how to improve the world and the level of destruction caused by that guesswork-based model.
The analysis, then, seems to rest on a guesswork-based, ill-defined model with limited predictive power or falsifiability, and to involve a lot of arguing with organizations and people you perceive as propagandists for empirically and morally wrong views. It also seems to involve a tendency to discourage behaviors that could disconfirm its assumptions.
But I don’t want to tear into it too deeply. I recognize that simulacra levels point at something real. I also think that doing this too much would be hypocritical.
If I saw more attempts to falsify the model or use it to make predictions, I’d be happier with it.