Making a claim like “I claim that a ‘true’ LWer, upon noticing that they were developing a model of me as being butthurt and complaining, would be surprised” seems like an unfair social move to me. It is generally considered rude to say “actually my model of you is totally compatible with saying you’re butthurt and complaining” or even “I haven’t kept track of you enough to have any sort of prior on this and so am going with my observations,” so people who believe those things aren’t going to comment.
It is also internally consistent that someone might have questioned their knee-jerk reaction and still downvoted you. My understanding is that a downvote just means “less of this on LW please,” and “even though this person is not being whiny, they’re certainly not taking the steps I would reasonably expect to avoid being mistaken for whiny” is a good reason to downvote. It seems a bit excessive to demand argumentation from everyone who clicks a button.
WHO SUMMONS THE GR*cough* *wheeze* goddamnit.
Yeah. The thing is, it’s waaay less like “magic buttons” that you push to escape the paradigm, and waaay more like trying to defuse a bomb that’s strapped to your soulmate’s skull, on the back of an off-road vehicle that’s driving too fast over rough terrain.
Which isn’t to say that it can’t be done.
Lemme give an example of a move that *might* work, sometimes:
====
“You’re playing status games,” says X.
“What? No, I’m not,” says Y.
“Yes, you are. You just pulled a lowering-Z’s-status move. It was pretty blunt, in fact.”
“Wh—ah, oh. Oh. Right, I guess—yeah, I can see how that interpretation makes perfect sense if you’re playing status games.”
“I’m not talking about whether I’m playing status games. I’m saying you are.”
“Uh. I’m not, or at least not in the way you’re thinking. Like, I grant that if you put on your status glasses my actions only make sense in terms of trying to put Z down or whatever, but if you put on some other glasses, like your engaging in truthseeking discourse glasses, you’ll see that my behavior also is complete and consistent and sensible there, too.”
“So, I notice that now you’re trying to lower my status.”
“What?”
“You’re trying to set the frame such that if I object to what you’re doing, I’m self-identifying as not-a-truthseeker.”
“Wh—no, I—gah, the point is, I’m not attending to status in the way you are, at all. Like, I see it, what you’re saying makes sense, but I wasn’t trying to play that game.”
“Well, you are playing it, though. Your actions are having ramifications within that frame, which is an obvious frame that everybody’s obviously inside of at all times. And I gotta say, you’re being a real jerk within the commonly-accepted rules of that game.”
“I’m specifically trying not to play it, though. I agree that the status implications of what is being said, and who’s saying it, are important. And we can attend to those directly if you’d like. But I’d like to attend to them while ALSO trying to figure out *what is actually real*.”
“Okay. Do you understand that I may be distrustful about that?”
“Of course. Be as distrustful as you need to be. But help me get to the truth.”
“Okay. I can absolutely take at face value the claim that you want to get to the truth. Can you accept that if we want to get to the truth, we first have to get the status thing out of the way?”
“I don’t know. That doesn’t feel right.”
“Listen. I’m willing to take your claim at face value. Can you take my claim at face value?”
“No, because these claims have truth values to them, and I don’t believe that status has to be resolved before truth can be reached.”
“Are you willing to be walked through an explanation?”
“Not really, I feel like it’s distracting us from the original conversation.”
“Okay. Then let’s just return to the original conversation, but could you rephrase what you said in a way that doesn’t sound like being a dick to Z, and maybe apologize to Z for phrasing it the way you did? And actually attend to how you feel when you do so; if you notice internal resistance, please entertain the hypothesis that this is coming from a part of you that actually *was* playing the status game. I’m not saying that to accuse you of anything; I just want you to notice it, because you want to seek truth and parts of you that play status games can interfere with that.”
Was the claim maybe something of the form “eh, can’t win ’em all, but relax, things don’t seem all that bad”?
That was broadly my point. The main reason I didn’t say it that way is that I recognise that some people have unusual preferences which make sense of decisions that would appear irrational from the standpoint of someone assuming normal preferences.
I’ve got my frustrations with the community too; for example, when I tried to convince people to take hypotheticals seriously, or when it was clear that the community was in decline but it was impossible to take action on it. That made me go away for a while and engage less with the community.
But I decided to give it another go after doing a lot of debating and just learning a lot more in general, and I’ve found that I’m now getting better responses. I can now predict the most likely ways that my posts will be misunderstood and throw in the appropriate disclaimers. There are still lots of ways in which we aren’t rational, but that is why we often call ourselves aspiring rationalists and the site Less Wrong. I agree that we still have large flaws in an absolute sense, but I haven’t been able to find another site where the discussion is better.
Maybe it’s different for you, maybe your time is best spent elsewhere, but your metric does not feel like a very accurate measure of the site’s health. Like, if I’m being really honest, I’m tempted to go and upvote the comment right now just to defuse the situation—but is that what you’re trying to measure? The votes on comments are much less reliable than the votes on posts anyway, because many people read the post, browse a few comments, then consider themselves finished with the post and never come back.
Haven’t finished reading this yet, but important point:
But note that for it to end up at −3 on net, that means that either a bunch of less-weighted users downvoted it, or at least three heavy hitters, or some combination
I don’t think the math there is right (and this is a confusing thing about the site for now; not sure what to do with it): assuming the comment started at 5, this is one-and-a-half heavy hitters, or three random people, which feels pretty different to me. (3 karma power is really easy to get.)
And the difference feels fairly significant.
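To make the disputed arithmetic concrete, here is a minimal sketch of the vote math, assuming the comment starts at +5 from the author’s self-vote and assuming illustrative karma weights of 5 for a “heavy hitter” downvote and 3 for an ordinary account (the function and the exact weights below are assumptions for illustration, not confirmed site values):

```python
# Hypothetical model of the disputed comment score.
# Assumed weights (illustrative, not confirmed site values):
# author self-vote = +5, "heavy hitter" downvote = 5, ordinary downvote = 3.
SELF_VOTE = 5

def net_score(heavy_downvotes: int, light_downvotes: int,
              heavy_weight: int = 5, light_weight: int = 3) -> int:
    """Net karma, assuming the comment starts at the author's self-vote."""
    return SELF_VOTE - heavy_downvotes * heavy_weight - light_downvotes * light_weight

print(net_score(1, 1))  # one heavy hitter + one ordinary account: 5 - 5 - 3 = -3
print(net_score(0, 3))  # three ordinary accounts: 5 - 9 = -4, also at or below -3
```

Under these assumed weights, falling from +5 to −3 takes only about eight points of downvotes, which is why “one-and-a-half heavy hitters, or three random people” is the right order of magnitude.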
True/fair, but I think this is something people are going to intuit wrong a lot in situations that vary on “who was actually doing the downvoting”, so I wanted to make sure to note it here.
For what it’s worth, I was another (the other?) person who downvoted the comment in question early (having upvoted the post, mostly for explaining an unfamiliar interesting thing clearly).
Catching up on all this has been a little odd for me. I’m obviously not a culture lord, but my vote wasn’t about this question of “the bar”, except (not that I would naturally frame it this way) insofar as I read CoolShirtMcPants as doing something similar to what you said you were doing (“here is my considered position on this; I encourage people to try it on and attend to specifically how it might come out as I imply”), and read you as creating an impasse instead of recognizing that and trying to draw out more concrete arguments/scenarios/evidence. Or, even if CSMP wasn’t intentionally doing that, a “bar” should ask that you treat the comment that way.
On one hand, sure, the situation wasn’t quite symmetric. And it was an obvious, generic-seeming objection, surely already considered at least by the author and better-expressed in other comments. But on the other hand, it can still be worth saying for the sake of readers or for starting a more substantive conversation; CSMP at least tried to dig a little deeper. And in this kind of blogging I don’t usually see one person’s (pseudonymously or otherwise) staking out some position as stronger evidence than another’s doing so. Neither should really get you further than deciding it’s worth thinking about for yourself. This case wasn’t an exception.
(I waffled on saying anything at all here because your referendum, if there is one, appears to have grown beyond this, and all this stuff about status seems to me to be a poor framing. But reading votes is a tricky business, so I can at least provide more information.)
I understand that there may be costs to you for continued interaction with the site, and that your primary motivations may have shifted, but I will say that your continued presence may act as a buffer that slows down the formation of an orthodoxy, and therefore you may be providing value by remaining even if the short term costs remain negative for a while.
Hrm. I would like it if Conor stuck around, since I think the content produced in the last 30 days was enjoyable and helpful to me, but I also think paying costs to slow down the formation of an LW orthodoxy that doesn’t align with his goals would be a bad investment of energy. If it was costless or very low cost or if preventing the orthodoxy/causing it to form in a way that aligned with his goals was possible, then it would probably be worth it.
I am not in Conor’s head, but if I were in their place I wouldn’t be convinced to stick around just as a delaying tactic. A much more convincing reason might be to stick around, take note of who does engage with me the way I wanted to engage with people, and then continue to post here while mostly just paying attention to those people.
I don’t think this is how one avoids playing status games. It’s not a simple ‘ignore status games and get to work.’ You don’t get to do that. Ever. I know, it sucks, and yes Brent Dill is laughing somewhere in the background.
You definitely don’t get to do that while pointing out that someone should update their reactions to what you’re saying based on the fact that you are the one making the statement. I realize that this might be a factually accurate statement that you would make even if no monkey brains were involved, but that doesn’t matter.
Even more than that, the defense of “I realize this looks like a status move but that is a coincidence that I wasn’t paying attention to” is not a defense a community can allow, if it wants to actually avoid status moves. See Just Saying What You Mean Is Literally Impossible. This is not something people can just turn off.
The way you avoid playing status games is that you play the status game of ‘keep everything in balance’ rather than not thinking about such things at all and asserting that your monkey brain isn’t steering your decisions at all. Yes, you really do need to think about who and what is being lowered or raised in status by everything you say, just like all the other implications of everything you say, and then you need to make sure you cancel those effects out. At least when they’re big enough. When having a discussion with someone I want to treat as an equal, who wants to treat me likewise, we both keep careful track of whether someone is gaining or losing status, and do subtle things to fix it if that gets too out of hand.
Does that take up a non-trivial amount of our cognitive effort? It can, and yes that sucks, but not paying attention to status games is not a way to not play, it’s a way to play without realizing what you’re doing. Does it mean occasionally sitting there and taking it when your status is getting lowered even if you don’t think the move in question is ‘fair’ in order to maintain balance? Yeah, it kind of does mean that too, if you can’t get there and restore the balance first.
If you leave or stop engaging over this issue, I think that would be a shame, because I don’t think the project you were hoping to be involved in actually exists anywhere else either, or ever will.
I’d also note that you have an adverse selection thing going on here. You’ve written a lot of stuff and mostly it’s been received quite well. If you find the one time in a hundred where the number looks bad in a way you don’t agree with, and turn that one thing into a referendum, it’s not going to go well.
My current state is of being very curious to learn why Conor believes that this is one of the most important variables on LW. It’s something that to me feels like a dial (how much status-dialogue is assumed in the comments) that improves discourse but is not necessary for it, while I think Conor thinks it’s necessary for us to be able to actually win. This is surprising, and Conor believes things for reasons, so I will ask him to share his information with me (time permitting, we’re currently both somewhat busy and on opposite continents).
I have a strong, and possibly scary claim to make.
Social reality is *important*. More than that, it *has gears*.
No, that’s not a strong enough phrasing.
Social reality has *physics*.
It is very hard for humans to understand those physics, since we exist at or near their metaphorical Planck scale. But there are actual, discernible principles at work. This is why I use terms like “incentive slope” or “status gradient”—I’m trying to get people to see the socio-cultural order as a structure that can be manipulated. I’m trying to get people to see White with Blue’s eyes.
You have goals. You have VERY ADMIRABLE GOALS. But even if I disagreed adamantly with your goals, they’re your *goals*. They’re your values. I can notice that I vehemently disagree with them, and declare war on you, or I can notice that I adamantly agree with them, and offer alliance. (I think you’ve noticed which side of that I wound up falling on.)
That said, you also have claims about what procedures and heuristics achieve your goals and maximize your values. Those CANNOT, themselves, be values. They are how your values interface with reality, and reality has a physics. It is actually possible to be correct or incorrect about whether a particular procedure or heuristic, implemented in a particular environment, will lead to maximizing or satisficing a particular goal.
I claim that many of your status-oriented heuristics are really not serving you well. My evidence is basically 20+ years of attempting exactly those heuristics myself, and observing that they really didn’t serve me well. And I really wanted them to.
That said, I could be wrong. It might be that there’s technique and skill involved; it might even be that I was implementing a flawed version of those heuristics. That would be awesome, if it were true. So I’d love to be proven wrong.
But before either of us is proven wrong or right, we need to start studying the shape of social reality’s physics, and formulating deep, testable hypotheses about why various moves will or won’t work.
And I claim that that’s gonna be hard.
(One of two posts, this one attempting to just focus on saying things that I’m pretty confident I’d endorse on reflection)
I think this is a noteworthy moment of “Double Crux is really the thing we need here”, because I think people are holding very different Cruxes as the thing that matters, and we either need to find the Common Crux or identify multiple Cruxes at the same time for anything good to happen.
Conor’s Crux as I understand it—The LessWrong movement will fail if it does not expect people to invest effort to double-check their assumptions and do rationality in the moment.
(This seems totally, 100% true to me. I can’t say how Zvi, Ben, or whoever else feels, but I’d be willing to bet they basically agree, and are not arguing with you because of disagreement on that.)
Zvi’s Crux as I understand it—The manner in which people give each other feedback is going to get filtered through some kind of status game; the only question is which one, and how we implement it in a way that ends up in the service of truth. And that Conor’s implementation is not currently doing a good enough job to win the game (either here or elsewhere).
Ben’s Crux as I understand it—The way to avoid the bad effects of status games is to avoid social bayesianism completely (i.e. don’t expect people to buy claims that are dependent on existing credibility).
I’m not sure I fully grok Ben’s claim, but insofar as I do, I think I mostly disagree with it (or at least he’s betting too hard on too strong a version of it). I think social-updating and trust are impossible to avoid, and they are often necessary as a filter for what to bother to listen to when looking at ideas that are maybe-brilliant-maybe-crackpottery.
Upon reflection, I can’t think of anything further I can say that isn’t dependent on first having heard Conor make an argument that assumes the listener is 100% on board with the claim that we should expect people to do-rationality-in-the-moment, and that whatever disagreement is going on is going on despite that.
(it may turn out other people don’t share that assumption, just noting that I personally will not be able to contribute usefully until such a point)
(note for until I return: this is a virtuous comment and I’m really happy you wrote it. Also this is no longer my crux at all, although I still think social Aumanning is mostly not good epistemology)
I think this is frequently the tone of Zvi’s writing. So, for what it’s worth, he’s not being any more lecture-y towards you than normal. ;-)
I think avoiding status games is sort of like trying to reach probabilities of zero or one: technically impossible, but you can get arbitrarily close, to the point where the weight that status shifts carry in everyone’s decision-making becomes almost unmeasurable.
I’m also not sure I would define “not playing the game” as making sure that everyone’s relative status within a group stays the same. That is simply a different status game with different objectives, and it seems to me that what you suggest would open a Pandora’s box of undesirable epistemic issues.

Personally, I want the people who consistently produce good ideas and articulate them well to have high status. And if they are doing it better than me, then I want them to have higher status than myself. I want higher status for myself too, naturally, but I channel that desire into practicing and maintaining as many of the characteristics that I believe aid the goals of the community as I can. My goal is almost never to preserve egalitarian reputation at the expense of other goals, even among people I respect, since I fear that elevating that goal to a high priority risks signal-boosting poor ideas and filtering out good ones.

Maybe that’s not what you’re actually suggesting needs to be done; maybe your definition doesn’t include things like reputation, but does consider status in the sense of who gets to be socially dominant. I think my crux is that it’s less important to make sure that “mutual respect” and “considering each other equal in status, to whatever extent status actually means anything” come to the same thing, and more important that the “market” of ideas generated by open discourse maintains a reasonable distribution of reputation.
I basically agree with this.