(That was a possible failure mode mentioned; I don’t know why you’re reiterating it with just more detail.)
Separately from the more meta discussion about norms, I believe the failure mode I mentioned is quite different from yours in an important respect that is revealed by the potential remedy you pointed out (“have each discussion group be composed of a proportional amount of each party’s supporters. or maybe have them be 1-on-1 discussions instead of groups of x>2 because those tend to go better anyways”).
Together with your explanation of the failure mode (“when e.g it’s 3 against 2 or 4 against 1”), it seems to me like you are thinking of a situation where one Republican, for instance, is in a group with 4 Democrats, and thus feels pressure from all sides in a group discussion because everyone there has strong priors that disagree with his/hers. Or, as another example, when a person arguing for a minority position is faced with 4 others who might be aggressively conventional-minded and instantly disapprove of any deviation from the Overton window. (I could very easily be misinterpreting what you are saying, though, so I am less than 95% confident of your meaning.)
In this spot, the remedy makes a lot of sense: prevent these gang-up-on-the-lonely-dissenter spots by making the ideological makeup of the group more uniform or by encouraging 1-on-1 conversations in which each ideology or system of beliefs will only have one representative arguing for it.
But I am talking about a failure mode that focuses on the power of one single individual to swing the room towards him/her, regardless of how many are initially on his/her side from a coalitional perspective. Not because those who disagree are initially in the minority and thus cowed into staying silent (and fuming, or in any case not being internally convinced), but rather because the “combination of charisma, social skills, and assertiveness in dialogue” would take control of the conversation and turn the entire room in its favor, likely by getting the others to genuinely believe that they are being persuaded for rational reasons instead of social proof.
This seems importantly different from your potential downside, as can be seen by the fact that the proposed remedy would not be of much use here; the Dark Arts conversational superpowers would be approximately as effective in 1-on-1 discussions as in group chats (perhaps even more so in some spots, since there would be nobody else in the room to potentially call out the missing logic or misleading rhetoric, etc.) and would still remain impactful even if the room was ideologically mixed to start.
To clarify, I do not expect the majority of such conversations to actually result in a clever arguer who’s good at conversations being able to convince those who disagree to come around to his/her position (the world is not lacking for charismatic and ambitious people, so I would expect everything around us to look quite different if convincing others to change their political leanings was simple). But, conditional on the group having reached consensus, I do predict, with high probability, that it did so because of these types of social dynamics rather than because they are composed of people that react well to “valid arguments” that challenge closely-held political beliefs.
(edit: wrote this before I saw the edit in your most recent comment. Feel free to ignore all of this until the matter gets resolved)
Meta-level response about “did you mean this or rule it out/not have a world model where it happens?”:
Some senses in which you’re right that it’s not what I was meaning:
It’s more specific/detailed. I was not thinking in this level of detail about how such discussions would play out.
I was thinking more about pressure than about charisma (where someone genuinely seems convincing). And yes, charisma could be even more powerful in a 1-on-1 setting.
Senses in which it is what I meant:
This is not something my world model rules out; it’s just that I wasn’t zoomed in on it, possibly because I’m used to sometimes experiencing a lot of pressure from neurotypical people over my beliefs. (That could have biased my internal frame to overfocus on pressure.)
For the parts about more even distributions being better, it’s more about: yes, these dynamics exist, but I thought they’d be even worse when combined with a background conformity pressure, e.g. when there’s one dominant, pressuring person and everyone but you is passively agreeing with what they’re saying, and tolerating it because they agree.
Object-level response:
“conditional on the group having reached consensus, I do predict, with high probability, that it did so because of these types of social dynamics rather than because they are composed of people that react well to ‘valid arguments’ that challenge closely-held political beliefs.”
(First, to be clear: the beliefs don’t have to be closely-held; we’d see consensus more often when, for all but at most one side, they’re not.)
That seems plausible. We could put it into a (handwavey) calculation form, where P(1 dark arts arguer) is higher than P(5 truth-seekers). But it’s actually a lot more complex; e.g., what about P(all opposing participants susceptible to such an arguer), or how one more-truth-seeking attitude can influence others to have a similar attitude for that context? (And this is without me having good priors on the frequencies and degrees of these qualities, so I’m mostly uncertain.)
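To make the hand-wavy comparison concrete, here is a minimal sketch. Both base rates below are pure made-up assumptions for illustration (I have no good priors on them, as noted above); the point is only that, under almost any plausible numbers, “at least one dark arts arguer in a group of 5” comes out far more likely than “all 5 are truth-seekers”:

```python
# Hand-wavy comparison from the comment above, with assumed base rates.
# Both probabilities are made-up illustration values, not estimates.
p_dark = 0.05   # assumed base rate of strong dark-arts arguers
p_truth = 0.30  # assumed base rate of reliably truth-seeking participants

n = 5  # group size

# P(at least one dark-arts arguer) = 1 - P(no one in the group is one)
p_at_least_one_dark = 1 - (1 - p_dark) ** n

# P(everyone in the group is a truth-seeker)
p_all_truth = p_truth ** n

print(f"P(>=1 dark arts arguer in group of {n}) = {p_at_least_one_dark:.4f}")
print(f"P(all {n} are truth-seekers)            = {p_all_truth:.5f}")
```

Even with these generous numbers (truth-seekers six times as common as dark-arts arguers), the first probability is roughly 0.23 and the second roughly 0.002, which is the shape of the claim that a reached consensus is more likely explained by social dynamics than by five valid-argument-responsive people.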
A world with such a proposal implemented might even then see training programs for clever dark arts arguing. (Kind of like I mentioned at the start, but again with me using the case of pressuring specifically: “memetics teaching people how to pressure others into agreeing during the group discussion”)
I think this is a good object-level comment.