Some people have high trust in science, and enough knowledge and engagement with the news media that it’s easy to transfer information about COVID-19 to them. Other people have low trust in science, media, and government, and their lack of scientific knowledge means that the cost of transferring scientific information about COVID-19 to them is high.
Compounding this problem, they don’t trust most other people to know better than they do, or to explain things to them in a way that’s accessible and respectful. There are probably people out there who could explain things to them in a way they’d be receptive to. But triangulating that interaction would be very difficult.
Given those difficulties, they “don’t buy” that COVID-19 is a serious issue, and turn to their favorite politician or pundit, who reinforces their distrust and misguided notions of how the world works, because their constituents have an interest in feeling wise and respected by powerful people, and it’s cheap to triangulate and transfer that feeling in exchange for a vote. By contrast, building trust in a smart pandemic response, transferring information about it, and triangulating that information with the voters is much more difficult.
FWIW I think this misses a lot of trouble that people on LW seem to have with the pandemic response, which is that the news media and scientific organizations like the CDC had significant issues responding in a rational way.
Oh I completely agree. My aim here wasn’t to give a complete account of pandemic weirdness, just to show how alternative language can help us dissect the situation in a more detailed way than seems possible with “social reality vs. physical reality.”
But there’s a big difference between “there are some people that don’t trust the true information we’re getting” and “none of the institutions that were supposed to be giving accurate information were.”
One of these points at an idiosyncratic problem with a specific group getting a bit confused, and one points at a systemic problem with our institutions’ relationship with truth. Which thing you focus on has direct implications for how useful this post is.
Treat my post as utterly worthless for actually dissecting the COVID response, and useful as a quick sketch illustrating how a different choice of language could help us dissect it. I find it hard to express the CDC’s behavior using the “social vs. physical reality” frame. It’s more natural to me to say something like:
“The CDC didn’t trust the American public to conserve the limited early supply of masks for healthcare workers if they knew that masks were helpful. So they transferred the false information that masks were unhelpful, because it was easier to triangulate a single message that produced the desired behavior to the entire American public than it was to triangulate two separate messages—one for the untrustworthy public who’d buy up all the masks if they knew it would help; and one for the trustworthy public who would refrain from buying up the masks even knowing that the masks would be helpful to them. This decision reduced the transaction cost of buying compliance from the public in the short run by eliminating the triangulation problem, but also undermined trust in the CDC to an unknown extent. This may increase the transaction cost of buying compliance from the American public in the future due to the increased price of purchasing their future trust.”
Shoehorning this analysis into the trust/transfer/triangulation framework for transaction costs is both easy for me to do, and feels like it makes my thoughts clearer. I know precisely what I mean by those words. By contrast, I don’t think that “social reality” or “physical” reality carve nature at the joints, and I don’t think I’ll ever have a definition of them that I trust will be interpreted correctly by my audience. That’s why I choose not to use them.
Obviously it’s very clunky compared to just saying it in natural speech. But sometimes a linguistic straitjacket is helpful.
I like the trust/transfer/triangulation framework. If I may, I’d like to add three words that I think would make it as expressive as the simulacra-level models: trick/truth/tribe. With these three words added, you can begin to easily see which simulacra level each player is playing on.
In your example above, you can see that the CDC is trying to transfer tricks to triangulate trust, which puts them at simulacra level 2 at least.
Meanwhile, in the example above that, you can see that a good portion of the public was trying to transfer truth about their tribe through their avoidance of masks, which puts them at least at level 3.
You could even use this framework to trace these organizations and individuals backwards, seeing when they learned to use tricks about truth and when they learned to use tricks about tribes.
But you’d still be stuck analyzing each individual agent and situation.
Meanwhile, the simulacra levels operate at a higher level of abstraction. Can we quickly see which agents will use tricks and truth to triangulate either trust or tribes? Is there a pattern to it? Is there a pattern to how agents learn to use these different tools? That’s what this framework is attempting to answer.
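For concreteness, the trick/truth/tribe extension above can be read as a 2×2 mapping from (what is transferred, what it is about) to a minimum simulacra level. This is just an illustrative sketch of that reading, not something from either framework’s original formulation; the function and table names are my own:

```python
# Sketch: the trick/truth/tribe extension as a lookup from
# (what is transferred, what it is about) to a minimum simulacra level.
SIMULACRA_LEVEL = {
    ("truth", "object reality"): 1,  # honest claims about the world
    ("trick", "object reality"): 2,  # lies about the world (e.g. "masks don't help")
    ("truth", "tribe"): 3,           # honest signals of group membership
    ("trick", "tribe"): 4,           # manipulation of the group signals themselves
}

def classify(transfer: str, subject: str) -> int:
    """Return the minimum simulacra level implied by a message."""
    return SIMULACRA_LEVEL[(transfer, subject)]

# The CDC's mask message: a trick about object reality -> at least level 2.
print(classify("trick", "object reality"))  # 2
# Mask avoidance as tribal signaling: truth about tribe -> at least level 3.
print(classify("truth", "tribe"))           # 3
```

The point of the table form is the limitation noted above: it classifies one message from one agent at a time, whereas the simulacra-level framing asks about patterns across agents and over time.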