Broadly an overall point that makes sense and feels good to me.
Something feels off or at least oversimplified to me in some of the cases, particularly these two lines of thinking:
There’s no substantive disagreement between me and critics-of-my-blocking-policy about the difficulties that this imposes—the way it makes certain conversations tricky or impossible, the way it creates little gaps and blind spots in discussion and consensus.
&
As far as I could tell, both I and the admin team agreed about its absolute size; there were no disagreements about things like e.g. “broken links to previously written essays are a pain and a shame.”
I found myself not actually trusting that there was “no disagreement” about the nature or size in these cases. Maybe I would if I had more data about each situation, but something about how it’s being written about raises suspicion for me. It’s not per se that I think there was disagreement, but that I think the apparent agreement was only on the level of the objective details (broken links etc.): that you didn’t know how each other felt about it or what it meant to each other, and that if you’d more thoroughly seen the world through each others’ eyes, it wouldn’t seem like “zero point” is the relevant frame here.
One attempt to point at that:
It seems to me that without straightforward scales on which to measure things, or even clarity on exactly what the units are, “setting the zero point” isn’t even a real move that’s available (intentionally or not), and I would expect people discussing in good faith to nonetheless end up with different zero points as a result of those unclear scales and units.
Taking the latter case in particular, it seems likely to me (at least based on what you’ve written) that the LW admins were mostly tracking something like a sense of betrayal of expectations that people would have about LW as an ever-growing web of wisdom, and that feeling of betrayal is their unit. And you’re measuring something more on the level of “how much wisdom is on LW?” And from those two scales, two different natural zero points emerge:
in removing the posts, LW goes from zero betrayal of the expectation of posts by default sticking around to more-than-zero betrayal of the expectation of posts sticking around
in removing the posts, LW goes from more-than-zero wisdom on it from everybody’s posts to less-than-before-but-still-more-than-zero wisdom on it with everybody’s posts minus the Conor Moreton series (and in the meantime there was some more-than-zero temporary wisdom from those posts having gone up at all)
I noticed I ended up flipping the scales here, such that both are just zero-to-more-than-zero, even though one is more-than-zero of an unwanted thing. Not sure if that’s incidental or relevant. Sometimes, in orienting to situations like this, I’ve found that there’s only ever presence of something, never absence.
I’m not totally satisfied with this articulation but maybe it’s a starting point for us to pick out whatever structure I’m noticing here.
Ah, this comment from Facebook also feels relevant:
Sam Harris’ argument style reminds me very much of the man who trained me, and the example of fire smoke negatively affecting health is a great zero point to contest. Sam has slipped in a zero point of physical health being the only form of health which matters, or at least the highest. One would have to argue, against his zero point, that there are other values which can be measured in terms of health, greater than the mere physical health associated with fire; psychological, familial, and social values immediately come to mind. Further, in the case of Sam, famed for his epistemological intransigence, one would likely have to argue against his zero point of what constitutes rationality itself in order to further one’s position that physical health is very often a secondary value, as this sort of argument follows more of a conversational arrangement of complex interdependent factors than the straight, rigorous logic Sam seemingly prefers.
A lot of what’s going on here is primarily frame control—setting the relevant scale on which a particular zero is then made salient. And that is not being done in the nice explicit friendly way.
He’s not casting the sneaky dark-arts version of the spell
Sam Harris here is not casting a sneaky version of Tare Detrimens, but he’s maybe (intentionally or not, benevolently or malevolently) casting a sneaky version of Fenestra Imperium.
Huh—it suddenly struck me that Peter Singer is doing the exact same thing in the drowning child thought experiment, by the way, as Tyler Alterman points out beautifully in Effective altruism in the garden of ends. He takes for granted that the frame of “moral obligation” is relevant to why someone might save the child, then uses our intuitions towards saving the child to suggest that we agree with him about this obligation being present and relevant, then he uses logic to argue that this obligation applies elsewhere too. All of that is totally explicit and rational within that frame, but he chose the frame.
In both cases, everyone agrees about what actually happens (a child dies, or doesn’t; you contribute, or you don’t).
In both cases, everyone agrees because within the frame that has been presented there is no difference! Meanwhile there is a difference in many other useful frames! And this choice of frame is NOT, as far as I can recall, explicit. Rather than rely on recall, let me actually just go check… In this video, he doesn’t use the phrase “moral obligation”, but asks “[if I walked past,] would I have done something wrong?” This interactive version offers a forced choice: “do you have a moral obligation to rescue the child?”
In both cases, the question assumes the frame, and is not explicit about the arbitrariness of doing so. So yes, he is explicit about setting the zero point, but focusing on that part of the move obscures the larger inexplicit move he’s making beforehand.
I like and agree with your argument here in general.
I don’t think it’s true that, in the specific LW case, “...you didn’t know how each other felt about it or what it meant to each other, and that if you’d more thoroughly seen the world through each others’ eyes, it wouldn’t seem like ‘zero point’ is the relevant frame here.”
Or at least, none of what you said about the LW admin perspective was new to me; it had all been taken into account by me at the time. (I suspect at all times I was capable of passing their ITT with at least a C+ grade; I am less sure they were capable of passing mine.)
But that individual case seems separate from your overall point, which does seem correct. So I’m not sure where disagreement lies.