I have a strong, and possibly scary, claim to make.
Social reality is *important*. More than that, it *has gears*.
No, that’s not a strong enough phrasing.
Social reality has *physics*.
It is very hard for humans to discern that physics, since we exist at or near its metaphorical Planck scale. But there are actual, discernible principles at work. This is why I use terms like “incentive slope” or “status gradient”—I’m trying to get people to see the socio-cultural order as a structure that can be manipulated. I’m trying to get people to see White with Blue’s eyes.
You have goals. You have VERY ADMIRABLE GOALS. But even if I disagreed adamantly with your goals, they’re your *goals*. They’re your values. I can notice that I vehemently disagree with them, and declare war on you, or I can notice that I adamantly agree with them, and offer alliance. (I think you’ve noticed which side of that I wound up falling on.)
That said, you also have claims about what procedures and heuristics achieve your goals and maximize your values. Those CANNOT, themselves, be values. They are how your values interface with reality, and reality has a physics. It is actually possible to be correct or incorrect about whether a particular procedure or heuristic, implemented in a particular environment, will lead to maximizing or satisficing a particular goal.
I claim that many of your status-oriented heuristics are really not serving you well. My evidence is basically 20+ years of attempting exactly those heuristics myself, and observing that they really didn’t serve me well. And I really wanted them to.
That said, I could be wrong. It might be that there’s technique and skill involved; it might even be that I was implementing a flawed version of those heuristics. That would be awesome, if it were true. So I’d love to be proven wrong.
But before either of us is proven wrong or right, we need to start studying the shape of social reality’s physics, and formulating deep, testable hypotheses about why various moves will or won’t work.
And I claim that that’s gonna be hard.
(One of two posts; this one attempts to focus just on saying things that I’m pretty confident I’d endorse on reflection.)
I think this is a noteworthy moment of “Double Crux is really the thing we need here”, because I think people are holding very different Cruxes as the thing that matters, and we either need to find the Common Crux or identify multiple Cruxes at the same time for anything good to happen.
Conor’s Crux as I understand it—The LessWrong movement will fail if it does not expect people to invest effort to double-check their assumptions and to check their rationality in the moment.
(This seems totally, 100% true to me. I can’t say how Zvi, Ben, or anyone else feels, but I’d be willing to bet they basically agree, and are not arguing with you out of disagreement on that.)
Zvi’s Crux as I understand it—The manner in which people give each other feedback is going to get filtered through some kind of status game; the only question is which one, and how we implement it in a way that ends up in the service of truth. And Conor’s implementation is not currently doing a good enough job to win the game (either here or elsewhere).
Ben’s Crux as I understand it—The way to avoid the bad effects of status games is to avoid social bayesianism completely (i.e. don’t expect people to buy claims that are dependent on existing credibility).
I’m not sure I fully grok Ben’s claim, but insofar as I do, I think I mostly disagree with it (or at least he’s betting too hard on too strong a version of it). I think social-updating and trust are impossible to avoid, and they are often necessary as a filter for deciding what to bother listening to when looking at ideas that are maybe-brilliant, maybe-crackpottery.
Upon reflection, I can’t think of anything further I can say that isn’t dependent on first having heard Conor make an argument that assumes the listener is 100% on board with the claim that we should expect people to do-rationality-in-the-moment, and that whatever disagreement is going on is going on despite that.
(It may turn out other people don’t share that assumption; I’m just noting that I personally will not be able to contribute usefully until that point.)
(Note for until I return: this is a virtuous comment and I’m really happy you wrote it. Also, this is no longer my crux at all, although I still think social Aumanning is mostly not good epistemology.)