Just checking, what are X, Y and Z?
(I’m interested in a concrete answer but would be happy with a brief vague answer too!)
(Added: Please don’t feel obliged to write a long explanation here just because I asked; I really just wanted to ask a small question.)
The same stuff that’s outlined in the post, both up at the top where I list things my brain tries to do, and down at the bottom where I say “just the basics, consistently done.”
Regenerating the list again:
Engaging in, and tolerating/applauding those who engage in:
Strawmanning (misrepresenting others’ points as weaker or more extreme than they are)
Projection (speaking as if you know what’s going on inside other people’s heads)
Putting little to no effort into distinguishing your observations from your inferences/speaking as if things definitely are what they seem to you to be
Only having or tracking a single hypothesis/giving no signal that there is more than one explanation possible for what you’ve observed
Overstating the strength of your claims
Being much quieter in one’s updates and oopses than one was in one’s bold wrongness
Weaponizing equivocation/doing motte-and-bailey
Generally, doing things which make it harder rather than easier for people to see clearly and think clearly and engage with your argument and move toward the truth
This is not an exhaustive list.
Mechanistically… since stag hunt is in the title of the post… it seems like you’re saying that any one person committing “enough of these epistemic sins to count as playing stag” would mean that all of lesswrong fails at the stag hunt, right?
And it might be the case that a single person playing stag could be made up of them failing at even just a single one of these sins? (This is the weakest point in my mechanistic model, perhaps?)
Also, what you’re calling “projection” there is not the standard model of projection I think? And my understanding is that the standard model of projection is sort of explicitly something people can’t choose not to do, by default. In the standard model of projection it takes a lot of emotional and intellectual work for a person to realize that they are blaming others for problems that are really inside themselves :-(
(For myself, I try not to assume I even know what’s happening in my own head, because experimentally, it seems like humans in general lack high quality introspective access to their own behavior and cognition.)
The practical upshot here, to me, is that if the models you’re advocating here are true, then it seems to me like lesswrong will inevitably fail at “hunting stags”.
...
And yet it also seems like you’re exhorting people to stop committing these sins and exhorting them moreover to punitively downvote people according to these standards because if LW voters become extremely judgemental like this then… maybe we will eventually all play stag and thus eventually, as a group, catch a stag?
So under the models that you seem to me to have offered, the (numerous individual) costs won’t buy any (group) benefits? I think?
There will always inevitably be a fly in the ointment… a grain of sand in the chip fab… a student among the masters… and so the stag hunt will always fail unless it occurs in extreme isolation with a very small number of moving parts of very high quality?
And yet lesswrong will hopefully always have an influx of new people who are imperfect, but learning and getting better!
And that’s (in my book) quite good… even if it means we will always fail at hunting stags.
...
The thing I think that’s good about lesswrong has almost nothing to do with bringing down a stag on this actual website.
Instead, the thing I think is good about lesswrong has to do with creating a stable pipeline of friendly people who are all, over time, getting a little better at thinking, so they can “do more good thinking” in their lives, and businesses, and non-profits, and perhaps from within government offices, and so on.
I’m (I hope) realistically hoping for lots of little improvements, in relative isolation, based on cross-fertilization among cool people, with tolerance for error, and sharing of ideas, and polishing stuff over time… Not from one big leap based on purified perfect cooperation (which is impossible anyway for large groups).
You’re against “engaging in, and tolerating/applauding” lots and lots of stuff, while I think that most of the actual goodness arises specifically from our tolerant engagement of people making incremental progress, and giving them applause for any such incremental improvements, despite our numerous inevitable imperfections.
Am I missing something? What?
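(A minimal sketch, as an aside, of the arithmetic behind the “inevitably fail” worry above: if catching the stag requires literally everyone to cooperate, and each of N participants independently plays stag with probability p, the group succeeds with probability p^N. The specific p and N values below are purely illustrative, not numbers taken from this thread.)

```python
# Illustrative only: probability that *every* participant plays stag,
# assuming each of N people independently does so with probability p.
def p_all_play_stag(p: float, n: int) -> float:
    return p ** n

print(p_all_play_stag(0.99, 10))    # ~0.90  -- a small, careful group
print(p_all_play_stag(0.99, 100))   # ~0.37  -- a medium-sized group
print(p_all_play_stag(0.99, 4000))  # ~3e-18 -- a whole-site-sized group
```

Under an all-or-nothing payoff like this, only a tiny, highly selected group has a realistic chance of success, which is the shape of the “extreme isolation with a very small number of moving parts” claim above.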
I am confused by a theme in your comments. You have repeatedly chosen to express that the failure of a single person completely destroys all the value of the website, even going so far as to quote ridiculous numbers (on the order of E-18 [1]) in support of this.
The only model I have for your behavior that explains why you would do this, instead of assuming something like Duncan believing something like “The value of C cooperators and D defectors is min(0, C − D²)” is that you are trying to make the argument look weak. If there is another reason to do this, I’d appreciate an explanation, because this tactic alone is enough to make me view the argument as likely adversarial.
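(Again as an illustrative aside: the contrast being drawn is between a value function where any single defection zeroes the group’s payoff and the quoted min(0, C − D²), which barely moves when one person defects. Both functions below are assumptions for illustration; neither is confirmed as anyone’s actual model, and the second simply evaluates the formula as quoted.)

```python
# Two toy value models for a group with c cooperators and d defectors.
# Neither is claimed to be the post author's actual position.
def value_all_or_nothing(c: int, d: int) -> float:
    """Reading under critique: any single defector zeroes the payoff."""
    return float(c) if d == 0 else 0.0

def value_quoted(c: int, d: int) -> float:
    """The formula quoted above: min(0, C - D^2)."""
    return float(min(0, c - d ** 2))

for c, d in [(100, 0), (100, 1), (100, 5), (100, 20)]:
    print(f"C={c} D={d}: all-or-nothing={value_all_or_nothing(c, d)}, "
          f"quoted={value_quoted(c, d)}")
```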
Mechanistically… since stag hunt is in the title of the post… it seems like you’re saying that any one person committing “enough of these epistemic sins to count as playing stag” would mean that all of lesswrong fails at the stag hunt, right?

No, and if you had stopped there and let me answer rather than going on to write hundreds of words based on your misconception, I would have found it more credible that you actually wanted to engage with me and converge on something, rather than that you just really wanted to keep spamming misrepresentations of my point in the form of questions.
Epistemic status: socially brusque wild speculation. If they’re in the area and it wouldn’t be high effort, I’d like JenniferRM’s feedback on how close I am.
My model of JenniferRM isn’t of someone who wants to spam misrepresentations in the form of questions. In response to Dweomite’s comment below, they say:
It was a purposefully pointed and slightly unfair question. I didn’t predict that Duncan would be able to answer it well (though I hoped he would chill out, give a good answer, and then we could high five, or something).

If he answered in various bad ways (that I feared/predicted), then I was ready with secondary and tertiary criticisms.

My model of the model which outputs words like these is that they’re very confident in their own understanding—viewing themself as a “teacher” rather than a student—and are trying to lead someone who they think doesn’t understand by the nose through a conversation which has been plotted out in advance.
Plausible to me. (Thanks.)