Maybe it’s just your phrasing, but I feel like this subtly misses what it means to contextualize: it supposes you can create a context where something can be left out, like saying “let me create a new set of everything that doesn’t include everything.”
I’m confused by what you mean when you say “just tell the truth”. The only interpretation that comes to mind is that the contextualized perspective is incapable of saying anything true, and that seems insufficiently charitable.
I think contextualization allows something like understanding how the study intends for me to respond and using that to guess the teacher’s password, rather than falling into what I would consider the epistemic trap of thinking the study’s isolating perspective is the “real” one. Maybe that’s what you meant?
I think a proper contextualizing perspective would recognize that the study’s isolated perspective is indeed one of the most relevant perspectives when in the study. If I’m tracking what people will think of me when, in fact, what I do during the study won’t get back to anyone I care about, I’m not properly tracking context; rather, I’ve internalized tribal perspectives so deeply that I can’t separate them from the real context.
To me this is what separates Simulacra levels from contextualizing.
People were interviewed after the research and asked to explain their answers, so there were social feedback mechanisms. Even if there wasn’t peer-to-peer social feedback, it was certainly possible to annoy the authority (the researchers) asking the questions, much like annoying a teacher who gives you a test. The researchers want you to answer a particular way, so people reasonably guess what that way is, even if they don’t already have it highly internalized (as most people do).
This is how people have learned to deal with questions in general. And people are correct to be very wary of guessing “it’s safe to be literal now”: often when it looks safe, it isn’t, so people arrive at the reasonable rule of thumb that it’s never safe. They decide, though not as a conscious decision, that maintaining a literalist persona for the rare occasions it might be useful isn’t worth the cost, especially when it’s hard to even identify any safe times to use it. People have near-zero experience in situations where being hyper-literal (or whatever you want to call it) won’t be punished. Those scenarios barely exist; even science, academia, and Less Wrong mostly aren’t like that.
There’s more related to this in my follow-up post: Asch Conformity Could Explain the Conjunction Fallacy