I don’t think this is how one avoids playing status games. It’s not a simple ‘ignore status games and get to work.’ You don’t get to do that. Ever. I know, it sucks, and yes Brent Dill is laughing somewhere in the background.
You definitely don’t get to do that while pointing out that someone should update their reactions to what you’re saying based on the fact that you are one making the statement. I realize that this might be a factually accurate statement that you would make even if no monkey brains were involved, but that doesn’t matter.
Even more than that, the defense of “I realize this looks like a status move but that is a coincidence that I wasn’t paying attention to” is not a defense a community can allow, if it wants to actually avoid status moves. See Just Saying What You Mean Is Literally Impossible. This is not something people can just turn off.
The way you avoid playing status games is that you play the status game of ‘keep everything in balance’ rather than not thinking about such things at all and asserting that your monkey brain isn’t steering your decisions at all. Yes, you really do need to think about who and what is being lowered or raised in status by everything you say, just like all the other implications of everything you say, and then you need to make sure you cancel those effects out. At least when they’re big enough. When having a discussion with someone I want to treat as an equal, who wants to treat me likewise, we both keep careful track of whether someone is gaining or losing status, and do subtle things to fix it if that gets too out of hand.
Does that take up a non-trivial amount of our cognitive effort? It can, and yes that sucks, but not paying attention to status games is not a way to not play, it’s a way to play without realizing what you’re doing. Does it mean occasionally sitting there and taking it when your status is getting lowered even if you don’t think the move in question is ‘fair’ in order to maintain balance? Yeah, it kind of does mean that too, if you can’t get there and restore the balance first.
If you leave or stop engaging over this issue, I think that would be a shame, because I don’t think the project you would then have been hoping to have been involved in actually exists anywhere else either, or ever will.
I’d also note that you have an adverse selection thing going on here. You’ve written a lot of stuff and mostly it’s been received quite well. If you find the one time in a hundred where the number looks bad in a way you don’t agree with, and turn that one thing into a referendum, it’s not going to go well.
My current state is of being very curious to learn why Conor believes that this is one of the most important variables on LW. It’s something that to me feels like a dial (how much status-dialogue is assumed in the comments) that improves discourse but is not necessary for it, while I think Conor thinks it’s necessary for us to be able to actually win. This is surprising, and Conor believes things for reasons, so I will ask him to share his information with me (time permitting, we’re currently both somewhat busy and on opposite continents).
I have a strong, and possibly scary claim to make.
Social reality is *important*. More than that, it *has gears*.
No, that’s not a strong enough phrasing.
Social reality has *physics*.
It is very hard for humans to understand them, since we exist at or near its metaphorical Planck scale. But, there are actual, discernible principles at work. This is why I use terms like “incentive slope” or “status gradient”—I’m trying to get people to see the socio-cultural order as a structure that can be manipulated. I’m trying to get people to see White with Blue’s eyes.
You have goals. You have VERY ADMIRABLE GOALS. But even if I disagreed adamantly with your goals, they’re your *goals*. They’re your values. I can notice that I vehemently disagree with them, and declare war on you, or I can notice that I adamantly agree with them, and offer alliance. (I think you’ve noticed which side of that I wound up falling on.)
That said, you also have claims about what procedures and heuristics achieve your goals and maximize your values. Those CANNOT, themselves, be values. They are how your values interface with reality, and reality has a physics. It is actually possible to be correct or incorrect about whether a particular procedure or heuristic, implemented in a particular environment, will lead to maximizing or satisficing a particular goal.
I claim that many of your status-oriented heuristics are really not serving you well. My evidence is basically 20+ years of attempting exactly those heuristics myself, and observing that they really didn’t serve me well. And I really wanted them to.
That said, I could be wrong. It might be that there’s technique and skill involved; it might even be that I was implementing a flawed version of those heuristics. That would be awesome, if it were true. So I’d love to be proven wrong.
But before either of us is proven wrong or right, we need to start studying the shape of social reality’s physics, and formulating deep, testable hypotheses about why various moves will or won’t work.
And I claim that that’s gonna be hard.
(One of two posts, this one attempting to just focus on saying things that I’m pretty confident I’d endorse on reflection)
I think this is a noteworthy moment of “Double Crux is really the thing we need here”, because I think people are holding very different Cruxes as the thing that matters, and we either need to find the Common Crux or identify multiple Cruxes at the same time for anything good to happen.
Conor’s Crux as I understand it—The LessWrong movement will fail if it does not expect people to invest effort to double-check their assumptions and check their rationality in the moment.
(This seems totally, 100% true to me. I can’t say how Zvi, Ben or whoever else feels, but I’d be willing to bet they basically agree, and are not arguing with you because of disagreement on that.)
Zvi’s Crux as I understand it—The manner in which people give each other feedback is going to get filtered through some kind of status game; the only question is which one, and how we implement it in a way that ends up in the service of truth. And that Conor’s implementation is not currently doing a good enough job to win the game (either here or elsewhere).
Ben’s Crux as I understand it—The way to avoid the bad effects of status games is to avoid social Bayesianism completely (i.e. don’t expect people to buy claims that are dependent on existing credibility).
I’m not sure I fully grok Ben’s claim, but insofar as I do, I think I mostly disagree with it (or at least he’s betting too hard on too strong a version of it). I think social-updating and trust are impossible to avoid, and they are often necessary as a filter for what to bother listening to when looking at ideas that are maybe-brilliant-maybe-crackpottery.
Upon reflection, I can’t think of anything further I can say that isn’t dependent on first having heard Conor make an argument that assumes the listener is 100% on board with the claim that we should expect people to do-rationality-in-the-moment, and that whatever disagreement is going on is going on despite that.
(it may turn out other people don’t share that assumption, just noting that I personally will not be able to contribute usefully until such a point)
(note for until I return: this is a virtuous comment and I’m really happy you wrote it. Also, this is no longer my crux at all, although I still think social Aumanning is mostly not good epistemology)
I think this is frequently the tone of Zvi’s writing. So, for what it’s worth, he’s not being any more lecture-y towards you than normal. ;-)
I think avoiding status games is sort of like trying to reach probabilities of zero or one: technically impossible, but you can get arbitrarily close, to the point where the weight that status shifts carry in everyone’s decision-making becomes almost unmeasurable.
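(A purely illustrative gloss on that analogy, with notation introduced just for this aside: in log-odds terms, certainty sits at infinity, so finite evidence can push you arbitrarily close without ever arriving.)

```latex
% Map probability p to log-odds to see why 0 and 1 are approachable but unreachable.
\[
  \ell(p) = \log\frac{p}{1-p},
  \qquad p \to 0 \;\Rightarrow\; \ell \to -\infty,
  \qquad p \to 1 \;\Rightarrow\; \ell \to +\infty .
\]
% Each Bayesian update adds only a finite log-likelihood ratio:
\[
  \ell' = \ell + \log\frac{P(e \mid H)}{P(e \mid \neg H)},
\]
% so any finite sequence of updates leaves you short of certainty, just as any
% finite amount of effort leaves some residue of status-tracking in play.
```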
I’m also not sure I would define “not playing the game” as making sure that everyone’s relative status within a group stays the same. That is simply a different status game with different objectives, and it seems to me that what you suggest would open a Pandora’s box of undesirable epistemic issues.

Personally, I want the people who consistently produce good ideas and articulate them well to have high status. If they are doing it better than me, then I want them to have higher status than I do. I want higher status for myself too, naturally, but I channel that desire into practicing and maintaining the characteristics I believe aid the goals of the community. My goal is almost never to preserve egalitarian reputation at the expense of other goals, even among people I respect, since I fear that elevating that goal to a high priority risks signal-boosting poor ideas and filtering out good ones.

Maybe that’s not what you’re actually suggesting needs to be done; maybe your definition doesn’t include things like reputation, but only considers status in the sense of who gets to be socially dominant. My crux, I think, is that it’s less important to make sure that “mutual respect” and “consider equal in status, to whatever extent status actually means anything” amount to the same thing, and more important that the “market” of ideas generated by open discourse maintains a reasonable distribution of reputation.
I basically agree with this.