When I first read this I intuitively felt like this was a useful pattern (it reminds me of one of the useful bits of Illuminatus!), but I haven’t been able to construct any hypotheticals where I’d use it.
I don’t think it’s a compelling account of your three scenarios. The response in scenario 1 avoids giving Alec any orders, but it also avoids demonstrating the community’s value to him in solving the problem. To a goal-driven Alec who’s looking for resources rather than superiors, it’s still disappointing: “we don’t have any agreed-upon research directions, you have to come up with your own” is the kind of insight you can fit in a blog post, not something you have to go to a workshop to learn. “Why did I sign up for this?” is a pretty rude thing for this Alec to say out loud, but he’s kinda right. In this analysis, the response in scenario 3 is better because it clearly demonstrates value: Alec will have to come up with his own ideas, but he can surround himself with other people who are doing the same thing, and if he has a good idea he can get paid to work on it.
More generally, I think ambiguity between syncing and sharing is uncommon and not that interesting. Even when people are asking to be told what to do, there’s usually a lot of overlap between “the things the community would give as advice” and “the things you do to fit into the community”. For example, if you go to a go club and ask the players there how to get stronger at go, and you take their advice, you’ll both get stronger at go and become more like the kind of person who hangs out in go clubs. If you just want to be in sync with the go club narrative and don’t care about the game, you’ll still ask most of the same questions: the go players will have a hard time telling your real motivation, and it’s not clear to me that they have an incentive to try.
But if they did care about that distinction, one thing they could do is divide their responses into narrative and informative parts, tagged explicitly as “here’s what we do, and here’s why”: “We all studied beginner-level life-and-death problems before we tried reading that book of tactics you’ve got, because each of those tactics might come up once per game, if at all, whereas you’ll be thinking about life and death every time you make a move”. Or for the AI safety case, “We don’t have a single answer we’re confident in: we each have our own models of AI development, failure, and success, that we came to through our own study and research. We can explain those models to you but ultimately you will have to develop your own, probably more than once. I know that’s not career advice, as such, but that’s preparadigmatic research for you.” (Note that I only optimized that for illustrating the principle, not for being sound AI research advice!)
tl;dr I think narrative syncing is a natural category but I’m much less confident that “narrative syncing disguised as information sharing” is a problem worth noting, and in the AI-safety example I think you’re applying it to a mostly unrelated problem.
I haven’t been able to construct any hypotheticals where I’d use it…. tl;dr I think narrative syncing is a natural category but I’m much less confident that “narrative syncing disguised as information sharing” is a problem worth noting,
I’m curious what you think of the examples in the long comment I just made (which was partly in response to this, but which I wrote as its own thing because I also wish I’d added it to the post in general).
I’m now thinking there’re really four concepts:
Narrative syncing. (Example: “the sand is lava.”)
Narrative syncing that can easily be misunderstood as information sharing. (Example: many of Fauci’s statements about covid, if this article about it is correct.)
Narrative syncing that sets up social pressure not to disagree, or not to weaken the apparent social norm about how we’ll talk about that. (Example: “Gambi’s is a great restaurant and we are all agreed on going there,” when said in an irate tone of voice after a long and painful discussion about which restaurant to go to.)
Narrative syncing that falls into categories #2 and #3 simultaneously. (Example: “The 9/11 terrorists were cowards,” if used to establish a norm for how we’re going to speak around here rather than to share honest impressions and invite inquiry.)
I am currently thinking that category #4 is my real nemesis — the actual thing I want to describe, and that I think is pretty common and leads to meaningfully worse epistemics than an alternate world where we skillfully get the good stuff without the social pressures against inquiry/speech.
I also have a prediction that most (though not all) instances of #2 will also be instances of #3, which is part of why I think there’s a “natural cluster worth forming a concept around” here.
For example, if you go to a go club and ask the players there how to get stronger at go, and you take their advice, you’ll both get stronger at go and become more like the kind of person who hangs out in go clubs. If you just want to be in sync with the go club narrative and don’t care about the game, you’ll still ask most of the same questions: the go players will have a hard time telling your real motivation, and it’s not clear to me that they have an incentive to try.
This seems right to me about most go clubs, but there’re a lot of other places that seem to me different on this axis.
Distinguishing features of Go clubs from my POV:
A rapid and trustworthy feedback loop, where everyone wins and loses at non-rigged games of Go regularly. (Opposite of schools proliferating without evidence.)
A lack of need to coordinate individuals. (People win or lose Go games on their own, rather than by needing to organize other people into coordinating their play.)
Some places where I expect “being in sync with the narrative” would diverge more from “just figuring out how to get stronger / how to do the object-level task in a general way”:
A hypothetical Go club that somehow twisted around to boost a famous player’s ego about how very useful his particular life-and-death problems were, or something, maybe so they could keep him around and brag about how they had him at their club, and so individual members could stay on his good side. (Doesn’t seem very likely, but it’s a thought experiment.)
Many groups with an “ideological” slant, e.g. the Sierra Club or ACLU or a particular church
(?Maybe? not sure about this one) Many groups that are trying to coordinate their members to follow a particular person’s vision for coordinated action, e.g. Ikea’s or most other big companies’ interactions with their staff, or even a ~8-employee coffee shop that’s trying to realize a particular person’s vision