I agree, I’m just trying to say that the common rationalist theories on this topic often disagree with your take.
If lots of people have a false belief X, that’s prima facie evidence that “X is false” is newsworthy. There’s probably some reason that X rose to attention in the first place; and if nothing else, “X is false” at the very least should update our priors about what fraction of popular beliefs are true vs false.
I think this argument would be more transparent with examples. Whenever I think of examples of popular beliefs that it would be reasonable to reconsider in light of this, they end up involving highly politicized taboos.
Once we’ve established that “X is false” is newsworthy at all, we still need to weigh the costs vs benefits of disseminating that information.
I hope that everyone, including rationalists, is in agreement about all this. For example, prominent rationalists are familiar with the idea of infohazards, reputational risks, picking your battles, simulacrum level 2, and so on. I’ve seen a lot of strong disagreement on this forum about what newsworthy information should and shouldn’t be disseminated, and in what formats and contexts. I sure have my own opinions!
There are different kinds of infohazards. One is non-Bayesian infohazards, where certain kinds of information are thought to break people’s rationality; that seems obscure and not so relevant here. Another is recipes for destruction, where you give a small hostile faction the ability to unilaterally cause harm. This could theoretically be applicable if we were talking about publishing Scott Alexander’s personal address and his habits for when and where he goes, as that makes it more practical for terrorists to attack him. But that seems less relevant for his real name, when it is readily available and he ends up facing tons of attention regardless.
Reputational risks can at times be acknowledged, but at the same time reputational risk is one of the main justifications for the taboos. Stereotyping is basically reputational risk on a group level; if rationalists dismiss the danger of stereotyping with “well, I just have a curious itch”, that sure seems like a strong presumption of truthtelling over reputational risk.
Picking your battles seems mostly justified on pragmatics, so it seems to me that the NYT can just go “this is a battle that we can afford to pick”.
Rationalists seem to usually consider simulacrum level 2 to be pathological, on the basis of presumption of the desirability of truth.
…But all that is irrelevant to this discussion here. I was talking about whether Scott’s last name is newsworthy in the first place. For example, it’s not the case that lots of people around the world were under the false impression that Scott’s true last name was McSquiggles, and now the NYT is going to correct the record. (It’s possible that lots of people around the world were under the false impression that Scott’s true last name is Alexander, but that misconception can be easily corrected by merely saying it’s a pseudonym.) If Scott’s true last name revealed that he was secretly British royalty, or secretly Albert Einstein’s grandson, etc., that would also at least potentially be newsworthy.
Not everything is newsworthy. The pebbles-on-the-sidewalk example I mentioned above is not newsworthy. I think Scott’s name is not newsworthy either. Incidentally, I also think there should be a higher bar for what counts as newsworthy in the NYT, compared to what counts as newsworthy when I’m chatting with my spouse about what happened today, because of the higher opportunity cost.
I think this is a perfectly valid argument for why the NYT shouldn’t publish it; it just doesn’t seem very strong or robust, and it doesn’t square well with the general pro-truth ideology.
Like, if the NYT did go out and count the number of pebbles on your road, then yes there’s an opportunity cost to this etc., which makes it a pretty unnecessary thing to do, but it’s not like you’d have any good reason to whip out a big protest or anything. This is the sort of thing where at most the boss should go “was that really necessary?”, and both “no, it was an accident” and “yes, because of <obscure policy reason>” are fine responses.
If one grants a presumption of the value of truth, and grants that it is permissible, admirable even, to follow the itch to uncover things that people would really rather downplay, then it seems really hard to say that Cade Metz did anything wrong.
Another is recipes for destruction, where you give a small hostile faction the ability to unilaterally cause harm. … But that seems less relevant for his real name, when it is readily available and he ends up facing tons of attention regardless.
Not being completely hidden isn’t “readily available”. If finding his name is even a trivial inconvenience, it doesn’t cause the damage that plastering his name in the Times does.
I think this is a perfectly valid argument for why the NYT shouldn’t publish it; it just doesn’t seem very strong or robust… Like, if the NYT did go out and count the number of pebbles on your road, then yes there’s an opportunity cost to this etc., which makes it a pretty unnecessary thing to do, but it’s not like you’d have any good reason to whip out a big protest or anything.
The context from above is that we’re weighing costs vs benefits of publishing the name, and I was pulling out the sub-debate over what the benefits are (setting aside the disagreement about how large the costs are).
I agree that “the benefits are ≈0” is not a strong argument that the costs outweigh the benefits in and of itself, because maybe the costs are ≈0 as well. If a journalist wants to report the thickness of Scott Alexander’s shoelaces, maybe the editor will say it’s a waste of limited wordcount, but the journalist could say “hey it’s just a few words, and y’know, it adds a bit of color to the story”, and that’s a reasonable argument: the cost and benefit are each infinitesimal, and reasonable people can disagree about which one slightly outweighs the other.
But “the benefits are ≈0” is a deciding factor in a context where the costs are not infinitesimal. Like if Scott asserts that a local gang will beat him senseless if the journalist reports the thickness of his shoelaces, it’s no longer infinitesimal costs versus infinitesimal benefits, but rather real costs vs infinitesimal benefits.
If the objection is “maybe the shoelace thickness is actually Scott’s dark embarrassing secret that the public has an important interest in knowing”, then yeah that’s possible and the journalist should certainly look into that possibility. (In the case at hand, if Scott were secretly SBF’s brother, then everyone agrees that his last name would be newsworthy.) But if the objection is just “Scott might be exaggerating, maybe the gang won’t actually beat him up too badly if the shoelace thing is published”, then I think a reasonable ethical journalist would just leave out the tidbit about the shoelaces, as a courtesy, given that there was never any reason to put it in in the first place.
I get that this is an argument one could make. But the reason I started this tangent was because you said:
Here CM doesn’t directly argue that there was any benefit to doxxing; instead he kinda conveys a vibe / ideology that if something is true then it is self-evidently intrinsically good to publish it
That is, my original argument was not in response to the “Anyway, if the true benefit is zero (as I believe), then we don’t have to quibble over whether the cost was big or small” part of your post; it was in response to the vibe/ideology part.
Where I was trying to say: it doesn’t seem to me that Cade Metz was the one who introduced this vibe/ideology; rather, it seems to have been introduced by rationalists prior to this, specifically to defend tinkering with taboo topics.
Like, you mention that Cade Metz conveys this vibe/ideology that you disagree with, and you didn’t try to rebut it directly, I assumed because Cade Metz didn’t defend it but just treated it as obvious.
And that’s where I’m saying: since many rationalists, including Scott Alexander, have endorsed this ideology, there’s a sense in which it seems wrong, almost rude, to not address it directly. Like a sort of motte-and-bailey tactic.
If lots of people have a false belief X, that’s prima facie evidence that “X is false” is newsworthy. There’s probably some reason that X rose to attention in the first place; and if nothing else, “X is false” at the very least should update our priors about what fraction of popular beliefs are true vs false.
I think this argument would be more transparent with examples. Whenever I think of examples of popular beliefs that it would be reasonable to reconsider in light of this, they end up involving highly politicized taboos.
It is not surprising when a widely held false belief is caused by the existence of a taboo; otherwise the belief would probably already have been corrected, or wouldn’t have gained popularity in the first place. And giving examples of such beliefs is of course not really possible, precisely because it is taboo to argue that they are false.
If one grants a presumption of the value of truth, and grants that it is permissible, admirable even, to follow the itch to uncover things that people would really rather downplay, then it seems really hard to say that Cade Metz did anything wrong.
By coincidence, Scott has written about this subject.
It is not surprising when a widely held false belief is caused by the existence of a taboo; otherwise the belief would probably already have been corrected, or wouldn’t have gained popularity in the first place. And giving examples of such beliefs is of course not really possible, precisely because it is taboo to argue that they are false.
It’s totally possible to say taboo things; I do it quite often.
But my point is more that this doesn’t seem to disprove the existence of the tension/motte-and-bailey/whatever dynamic that I’m pointing at.