random idea for a voting system (i’m a few centuries late. this is just for fun.)
instead of voting directly, everyone is assigned to a discussion group of x people (say 5), made up of themself and others near them. the group meets to discuss at an official location (attendance is optional). only if those who showed up reach consensus does the group cast one vote.
many of these groups would not reach consensus, say 70-90%. that’s fine. the point is that most of the ones which do would be composed of people who make and/or are receptive to valid arguments. this would then shift the memetic focus of politics towards rational arguments instead of being mostly rhetoric/bias reinforcement (which is unlikely to produce consensus when repeated in this setting).
possible downside: another possible equilibrium is memetics teaching people how to pressure others into agreeing during the group discussion, when e.g. it’s 3 against 2 or 4 against 1. possible remedy: have each discussion group be composed of a proportional number of each party’s supporters. or maybe have them be 1-on-1 discussions instead of groups of x>2, because those tend to go better anyways.
also, this would let misrepresented minority positions be heard correctly.
i don’t think this would have saved humanity from ending up in an inadequate equilibrium, but maybe would have at least been less bad.
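a minimal toy sketch of the consensus filter (all numbers here are made-up assumptions for illustration, not part of the idea): if each attendee is independently “receptive to valid arguments” with some probability, and consensus happens only when everyone present is receptive, the fraction of groups that end up voting looks like this:

```python
import random

# toy model: each attendee is independently "receptive to valid arguments"
# with probability P_RECEPTIVE (an assumed number), and a group reaches
# consensus only if every attendee is receptive -- the simplest reading of
# the claim above, ignoring attendance and pressure/dark-arts dynamics.
P_RECEPTIVE = 0.7
GROUP_SIZE = 5
N_GROUPS = 100_000

consensus_groups = 0
for _ in range(N_GROUPS):
    group = [random.random() < P_RECEPTIVE for _ in range(GROUP_SIZE)]
    if all(group):
        consensus_groups += 1

print(f"groups casting a vote: {consensus_groups / N_GROUPS:.1%}")
# ~17% with these numbers, i.e. roughly the 10-30% consensus rate guessed
# above; lower receptiveness rates make the filter much stricter.
```

(with a receptiveness rate of 0.7 this lands near the 10-30% consensus rate guessed above; the point is only to show how sharply the all-must-agree rule filters.)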
the point is that most of the ones which do would be composed of people who make and/or are receptive to valid arguments
I strongly disagree with this, as a descriptive matter of how the vast majority of groups of regular (neurotypical) people function.
I would expect that the groups which reach consensus would generally do so because whichever of the 5 individuals has the greatest combination of charisma, social skills, and assertiveness in dialogue would dominate the discussion and steer it in a direction where whoever else might disagree gets conversationally out-skilled, to the point where social pressure from everyone else gets them to give up and drop their objections (likely by subjectively feeling that they are convinced by the arguments of the charismatic person, when in reality it’s just social proof doing the work).
I think the fact that you don’t expect this to happen is more due to you improperly generalizing from the community of LW-attracted people (including yourself), whose average psychological make-up appears to me to be importantly different from that of the broader public.
Please don’t make unfounded speculation[1] about my psychology. I feel pressured to respond just to say that’s not true (that I am not generalizing from lesswrong users).
the groups which reach consensus would generally do so because whichever of the 5 individuals has greatest combination of charisma, social skills, and assertiveness in dialogue would domineer the discussion
(That was a possible failure mode I mentioned; I don’t know why you’re reiterating it with just more detail.) My impression was that many neurotypicals are used (/desensitized) to that happening by now, and that there might frequently be competing attempts from multiple people which would not be resolved.
But this was not a strongly held belief, nor a topic that seems important at this phase of history; it was just a fun-idea-shortform. I feel discouraged by what I perceive to be the assertiveness/assumingness of your comment.
(edit: I agree correctly-hedged speculation is okay and would have been okay here, I meant something like confidently-expressed claims about another user’s mind with low evidence.)
I disagree that the speculation was unfounded. I checked your profile before making that comment (presumably written by you, and thus a very well-founded source) and saw “~ autistic.” I would not have made that statement, as written, if this had not been the case (for instance the part of “including yourself”).
Then, given my past experience with similar proposals that were written about on LW, in which other users correctly pointed out the problems with the proposal and it was revealed that the OP was implicitly making assumptions that the broader community was akin to that of LW, it was reasonable to infer that the same was happening here. (It still seems reasonable to infer this, regardless of your comment, but that is beside the point.) In any case, I said “think” which signaled that I understood my speculation was not necessarily correct.
I have written up my thoughts before on why good moderation practices should not allow for the mind-reading of others, but I strongly oppose any norm that says the mere speculation, explicitly labeled as such through language that signals some epistemic humility, is inherently bad. I even more strongly oppose a norm that other users feeling pressured to respond should have a meaningful impact on whether a comment is proper or not.
I expect your comment to not have been a claim about the norms of LW, but rather a personal request. If so, I do not expect to comply (unless required to by moderation).
I don’t agree that my bio stating I’m autistic[1] is strong/relevant* evidence that I assume the rest of the world is like me or LessWrong users; I’m very aware that this is not the case. I feel a lot of uncertainty about what happens inside the minds of neurotypical people (and most others), but I know they’re very different in various specific ways, and I don’t think the assumption you inferred is one I make; it was directly implied in my shortform that neurotypicals engage in politics in a really irrational way, are influenceable by such social pressures as you (and I) mentioned, etc.
*Technically, being a LessWrong user is some Bayesian evidence that one makes that assumption, if that’s all you know about them, so I added the hedge “strong/relevant”, i.e. enough to reasonably cause one to write “I think you are making [clearly-wrong assumption x]” instead of using more uncertain phrasings.
I even more strongly oppose a norm that other users feeling pressured to respond should have a meaningful impact on whether a comment is proper or not.
I agree that there are cases where feeling pressured to respond is acceptable. E.g., if someone writes a counterargument which one thinks misunderstands their position, they might feel some internal pressure to respond to correct this; I think that’s okay, or at least unavoidable.
I don’t know how to define a general rule for determining when making-someone-feel-pressured is okay or not, but this seemed like a case where it was not okay: in my view, it was caused by an unfounded confident expression of belief about my mind.
If you internally believe you had enough evidence to infer what you wrote at a level of confidence warranting only an ‘I think’ preface, perhaps it should not be against LW norms, though; I don’t have strong opinions on what site norms should be, or how norms should differ when the subject is the internal mind of another user.
More on norms: the assertive writing style of your two comments here also seems possibly norm-violating.
Edit: I’m flagging this for moderator review.
[1] the “~ ” you quoted is just a separator from the previous words, in case you thought it meant something else
As a moderator: I do think sunwillrise was being a bit obnoxious here. I think the norms they used here were fine for frontpage LW posts, but shortform is trying to do something that is more casual and more welcoming of early-stage ideas, and this kind of psychologizing, I think, has reasonably strong chilling effects on people feeling comfortable with that.
I don’t think it’s a huge deal; my best guess is I would just ask sunwillrise to comment less on quila’s stuff in particular, and if it becomes a recurring theme, to maybe more generally try to change how they comment on shortforms.
I do think the issue here is kind of subtle. I definitely notice an immune reaction to sunwillrise’s original comment, but I can’t fully put into words why I have that reaction, and I would also have that reaction if it was made as a comment on a frontpage post (but I would just be more tolerant of it).
I think the fact that you don’t expect this to happen is more due to you improperly generalizing from the community of LW-attracted people (including yourself), whose average psychological make-up appears to me to be importantly different from that of the broader public.
Like, I think my key issue here is that sunwillrise just started a whole new topic that quila had expressed no interest in talking about, which is the topic of “what are my biases on this topic, and if I am wrong, what would be the reason I am wrong?”, which like, IDK, is a fine topic, but it is just a very different topic that doesn’t really have anything to do with the object level. Like, whether quila is biased on this topic does not make a difference to the question of whether this policy-esque proposal would be a good idea, and I think quila (and most other readers) are usually more interested in discussing that than meta-level bias stuff.
There is also a separate thing, where making this argument in some sense assumes that you are right, which I think is a fine thing to do, but does often make good discussion harder. Like, I think for comments, it’s usually best to focus on the disagreement, and not to invoke random other inferences about the world, about what is true if you are right. There can be a place for that, especially if it helps elucidate your underlying world model, but I think in this case little of that happened.
(That was a possible failure mode mentioned, I don’t know why you’re reiterating it with just more detail)
Separately from the more meta discussion about norms, I believe the failure mode I mentioned is quite different from yours in an important respect that is revealed by the potential remedy you pointed out (“have each discussion group be composed of a proportional amount of each party’s supporters. or maybe have them be 1-on-1 discussions instead of groups of x>2 because those tend to go better anyways”).
Together with your explanation of the failure mode (“when e.g it’s 3 against 2 or 4 against 1”), it seems to me like you are thinking of a situation where one Republican, for instance, is in a group with 4 Democrats, and thus feels pressure from all sides in a group discussion because everyone there has strong priors that disagree with his/hers. Or, as another example, when a person arguing for a minority position is faced with 4 others who might be aggressively conventional-minded and instantly disapprove of any deviation from the Overton window. (I could very easily be misinterpreting what you are saying, though, so I am less than 95% confident of your meaning.)
In this spot, the remedy makes a lot of sense: prevent these gang-up-on-the-lonely-dissenter spots by making the ideological mix of the group more uniform or by encouraging 1-on-1 conversations in which each ideology or system of beliefs will only have one representative arguing for it.
But I am talking about a failure mode that focuses on the power of one single individual to swing the room towards him/her, regardless of how many are initially on his/her side from a coalitional perspective. Not because those who disagree are initially in the minority and thus cowed into staying silent (and fuming, or in any case not being internally convinced), but rather because the “combination of charisma, social skills, and assertiveness in dialogue” would take control of the conversation and turn the entire room in its favor, likely by getting the others to genuinely believe that they are being persuaded for rational reasons instead of social proof.
This seems importantly different from your potential downside, as can be seen by the fact that the proposed remedy would not be of much use here; the Dark Arts conversational superpowers would be approximately as effective in 1-on-1 discussions as in group chats (perhaps even more so in some spots, since there would be nobody else in the room to potentially call out the missing logic or misleading rhetoric, etc.) and would still remain impactful even if the room was ideologically mixed to start.
To clarify, I do not expect the majority of such conversations to actually result in a clever arguer who is good at conversations convincing those who disagree to come around to his/her position (the world is not lacking for charismatic and ambitious people, so I would expect everything around us to look quite different if convincing others to change their political leanings was simple). But, conditional on the group having reached consensus, I do predict, with high probability, that it did so because of these types of social dynamics rather than because it is composed of people that react well to “valid arguments” that challenge closely-held political beliefs.
(edit: wrote this before I saw the edit in your most recent comment. Feel free to ignore all of this until the matter gets resolved)
I think this is a good object-level comment.
Meta-level response about “did you mean this or rule it out/not have a world model where it happens?”:
Some senses in which you’re right that it’s not what I was meaning:
It’s more specific/detailed. I was not thinking in this level of detail about how such discussions would play out.
I was thinking more about pressure than about charisma (where someone genuinely seems convincing). And yes, charisma could be even more powerful in a 1-on-1 setting.
Senses in which it is what I meant:
This is not something my world model rules out; it just wasn’t zoomed in on, possibly because I’m used to sometimes experiencing a lot of pressure from neurotypical people over my beliefs (that could have biased my internal frame to overfocus on pressure).
For the parts about more even distributions being better, it’s more about: yes, these dynamics exist, but I thought they’d be even worse when combined with a background conformity pressure, e.g. when there’s one dominant-pressuring person and everyone but you is passively agreeing with what they’re saying, and tolerating it because they agree.
Object-level response:
conditional on the group having reached consensus, I do predict, with high probability, that it did so because of these types of social dynamics rather than because they are composed of people that react well to “valid arguments” that challenge closely-held political beliefs.
(First, to be clear: the beliefs don’t have to be closely-held; we’d see consensuses more often when, for {all but at most one side}, they’re not)
That seems plausible. We could put it into a (handwavey) calculation form, where P(at least 1 dark-arts arguer in the group) is higher than P(all 5 are truth-seekers). But it’s actually a lot more complex; e.g., what about P(all opposing participants are susceptible to such an arguer), or how e.g. one more-truth-seeking attitude can influence others to have a similar attitude for that context. (and this is without me having good priors on the frequencies and degrees of these qualities, so I’m mostly uncertain).
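As a toy version of that calculation (the numbers and symbols here are made-up assumptions, just to show the shape of the comparison): if a fraction $d$ of people argue in the dark-arts way and a fraction $t$ reliably respond to valid arguments, then for a group of 5,

$$P(\text{at least one dark-arts arguer}) = 1 - (1 - d)^5, \qquad P(\text{all five truth-seeking}) = t^5.$$

With e.g. $d = 0.05$ and $t = 0.3$, that’s roughly $0.23$ versus $0.002$, which is the sense in which the first term dominates; the susceptibility and attitude-spread effects above would then modify both terms in ways I don’t know how to estimate.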
A world with such a proposal implemented might even then see training programs for clever dark arts arguing. (Kind of like I mentioned at the start, but again with me using the case of pressuring specifically: “memetics teaching people how to pressure others into agreeing during the group discussion”)