If lots of people have a false belief X, that’s prima facie evidence that “X is false” is newsworthy. There’s probably some reason that X rose to attention in the first place; and if nothing else, “X is false” at the very least should update our priors about what fraction of popular beliefs are true vs false.
Once we’ve established that “X is false” is newsworthy at all, we still need to weigh the cost vs benefits of disseminating that information.
I hope that everyone, including rationalists, is in agreement about all this. For example, prominent rationalists are familiar with the idea of infohazards, reputational risks, picking your battles, simulacra level 2, and so on. I’ve seen a lot of strong disagreement on this forum about what newsworthy information should and shouldn’t be disseminated and in what formats and contexts. I sure have my own opinions!
…But all that is irrelevant to this discussion here. I was talking about whether Scott’s last name is newsworthy in the first place. For example, it’s not the case that lots of people around the world were under the false impression that Scott’s true last name was McSquiggles, and now NYT is going to correct the record. (It’s possible that lots of people around the world were under the false impression that Scott’s true last name is Alexander, but that misconception can be easily corrected by merely saying it’s a pseudonym.) If Scott’s true last name revealed that he was secretly British royalty, or secretly Albert Einstein’s grandson, etc., that would also at least potentially be newsworthy.
Not everything is newsworthy. The pebbles-on-the-sidewalk example I mentioned above is not newsworthy. I think Scott’s name is not newsworthy either. Incidentally, I also think there should be a higher bar for what counts as newsworthy in NYT, compared to what counts as newsworthy when I’m chatting with my spouse about what happened today, because of the higher opportunity cost.
I agree, I’m just trying to say that the common rationalist theories on this topic often disagree with your take.
If lots of people have a false belief X, that’s prima facie evidence that “X is false” is newsworthy. There’s probably some reason that X rose to attention in the first place; and if nothing else, “X is false” at the very least should update our priors about what fraction of popular beliefs are true vs false.
I think this argument would be more transparent with examples. Whenever I think of examples of popular beliefs that it would be reasonable to change one’s support of in the light of this, they end up involving highly politicized taboos.
Once we’ve established that “X is false” is newsworthy at all, we still need to weigh the cost vs benefits of disseminating that information.
I hope that everyone, including rationalists, is in agreement about all this. For example, prominent rationalists are familiar with the idea of infohazards, reputational risks, picking your battles, simulacra level 2, and so on. I’ve seen a lot of strong disagreement on this forum about what newsworthy information should and shouldn’t be disseminated and in what formats and contexts. I sure have my own opinions!
There are different distinctions when it comes to infohazards. One is non-Bayesian infohazards, where certain kinds of information are thought to break people’s rationality; that seems obscure and not so relevant here. Another is recipes for destruction, where you give a small hostile faction the ability to unilaterally cause harm. This could theoretically be applicable if we were talking about publishing Scott Alexander’s personal address and his habits for when and where he goes, as that makes it more practical for terrorists to attack him. But that seems less relevant for his real name, when it is readily available and he ends up facing tons of attention regardless.
Reputational risks can at times be acknowledged, but at the same time reputational risks are one of the main justifications for the taboos. Stereotyping is basically reputational risk on a group level; if rationalists dismiss the danger of stereotyping with “well, I just have a curious itch”, that sure seems like a strong presumption of truthtelling over reputational risk.
Picking your battles seems mostly justified on pragmatics, so it seems to me that the NYT can just go “this is a battle that we can afford to pick”.
Rationalists seem to usually consider simulacrum level 2 to be pathological, on the basis of presumption of the desirability of truth.
…But all that is irrelevant to this discussion here. I was talking about whether Scott’s last name is newsworthy in the first place. For example, it’s not the case that lots of people around the world were under the false impression that Scott’s true last name was McSquiggles, and now NYT is going to correct the record. (It’s possible that lots of people around the world were under the false impression that Scott’s true last name is Alexander, but that misconception can be easily corrected by merely saying it’s a pseudonym.) If Scott’s true last name revealed that he was secretly British royalty, or secretly Albert Einstein’s grandson, etc., that would also at least potentially be newsworthy.
Not everything is newsworthy. The pebbles-on-the-sidewalk example I mentioned above is not newsworthy. I think Scott’s name is not newsworthy either. Incidentally, I also think there should be a higher bar for what counts as newsworthy in NYT, compared to what counts as newsworthy when I’m chatting with my spouse about what happened today, because of the higher opportunity cost.
I think this is a perfectly valid argument for why NYT shouldn’t publish it, it just doesn’t seem very strong or robust and doesn’t square well with the general pro-truth ideology.
Like, if the NYT did go out and count the number of pebbles on your road, then yes, there’s an opportunity cost to this etc., which makes it a pretty unnecessary thing to do, but it’s not like you’d have any good reason to whip out a big protest or anything. This is the sort of thing where at most the boss should go “was that really necessary?”, and both “no, it was an accident” and “yes, because of <obscure policy reason>” are fine responses.
If one grants a presumption of the value of truth, and grants that it is permissible, admirable even, to follow the itch to uncover things that people would really rather downplay, then it seems really hard to say that Cade Metz did anything wrong.
Another is recipes for destruction, where you give a small hostile faction the ability to unilaterally cause harm. … But that seems less relevant for his real name, when it is readily available and he ends up facing tons of attention regardless.
Not being completely hidden isn’t “readily available”. If finding his name is even a trivial inconvenience, it doesn’t cause the damage caused by plastering his name in the Times.
I think this is a perfectly valid argument for why NYT shouldn’t publish it, it just doesn’t seem very strong or robust… Like, if the NYT did go out and count the number of pebbles on your road, then yes there’s an opportunity cost to this etc., which makes it a pretty unnecessary thing to do, but it’s not like you’d have any good reason to whip out a big protest or anything.
The context from above is that we’re weighing costs vs benefits of publishing the name, and I was pulling out the sub-debate over what the benefits are (setting aside the disagreement about how large the costs are).
I agree that “the benefits are ≈0” is not a strong argument that the costs outweigh the benefits in and of itself, because maybe the costs are ≈0 as well. If a journalist wants to report the thickness of Scott Alexander’s shoelaces, maybe the editor will say it’s a waste of limited wordcount, but the journalist could say “hey it’s just a few words, and y’know, it adds a bit of color to the story”, and that’s a reasonable argument: the cost and benefit are each infinitesimal, and reasonable people can disagree about which one slightly outweighs the other.
But “the benefits are ≈0” is a deciding factor in a context where the costs are not infinitesimal. Like if Scott asserts that a local gang will beat him senseless if the journalist reports the thickness of his shoelaces, it’s no longer infinitesimal costs versus infinitesimal benefits, but rather real costs vs infinitesimal benefits.
If the objection is “maybe the shoelace thickness is actually Scott’s dark embarrassing secret that the public has an important interest in knowing”, then yeah that’s possible and the journalist should certainly look into that possibility. (In the case at hand, if Scott were secretly SBF’s brother, then everyone agrees that his last name would be newsworthy.) But if the objection is just “Scott might be exaggerating, maybe the gang won’t actually beat him up too badly if the shoelace thing is published”, then I think a reasonable ethical journalist would just leave out the tidbit about the shoelaces, as a courtesy, given that there was never any reason to put it in in the first place.
I get that this is an argument one could make. But the reason I started this tangent was because you said:
Here CM doesn’t directly argue that there was any benefit to doxxing; instead he kinda conveys a vibe / ideology that if something is true then it is self-evidently intrinsically good to publish it
That is, my original argument was not in response to the “Anyway, if the true benefit is zero (as I believe), then we don’t have to quibble over whether the cost was big or small” part of your post, it was to the vibe/ideology part.
Where I was trying to say: it doesn’t seem to me that Cade Metz was the one who introduced this vibe/ideology; rather, it seems to have been introduced by rationalists prior to this, specifically to defend tinkering with taboo topics.
Like, you mention that Cade Metz conveys this vibe/ideology that you disagree with, and you didn’t try to rebut it directly, I assumed because Cade Metz didn’t defend it but just treated it as obvious.
And that’s where I’m saying, since many rationalists including Scott Alexander have endorsed this ideology, there’s a sense in which it seems wrong, almost rude, to not address it directly. Like a sort of Motte-Bailey tactic.
If lots of people have a false belief X, that’s prima facie evidence that “X is false” is newsworthy. There’s probably some reason that X rose to attention in the first place; and if nothing else, “X is false” at the very least should update our priors about what fraction of popular beliefs are true vs false.
I think this argument would be more transparent with examples. Whenever I think of examples of popular beliefs that it would be reasonable to change one’s support of in the light of this, they end up involving highly politicized taboos.
It is not surprising when a false belief held by lots of people is caused by the existence of a taboo. Otherwise the belief would probably already have been corrected, or wouldn’t have gained popularity in the first place. And giving examples of such beliefs is of course not really possible, precisely because it is taboo to argue that they are false.
Metz/NYT disagree. He doesn’t completely spell out why (it’s not his style), but, luckily, Scott himself did:
If someone thinks I am so egregious that I don’t deserve the mask of anonymity, then I guess they have to name me, the same way they name criminals and terrorists.
The long and short of it is that Metz/NYT considered Scott to be bad enough to deserve whatever inconveniences/punishments would come to him as a result of tying his alleged wrongthink to his real name.
None, if you buy the “we just have a curious itch to understand the most irrelevant orthodoxy you can think of” explanation. But if that’s a valid reason for rationalists to dig into things that are taboo because of their harmful consequences, is it not then also valid for Cade Metz to follow a curious itch to dig into rationalists’ private information?
Well, I don’t understand what that position has to do with doxxing someone. What does obsessively pointing out how a reigning orthodoxy is incorrect have to do with revealing someone’s private info and making it hard for them to do their jobs? The former is socially useful because a lot of orthodoxies result in bad policies or cause people to err in their private lives or whatever. The latter mostly isn’t.
Yes, sometimes the two coincide, e.g. revealing that the church uses heliocentric models to calculate celestial movements, or Watergate, or whatever. But that’s quite rare, and I note Metz didn’t provide any argument that doxxing Scott is like one of those cases.
Consider a counterfactual where Scott was, in his private life, crusading against DEI policies in a visible way. Then people benefiting from those policies may want to know that “there’s this political activist who’s advocating for policies that harm you, and the scope of his influence is way bigger than you thought”. Which would clearly be useful info for a decent chunk of readers. Knowing his name would be useful!
Instead, it’s just “we gotta say his name. It’s so obvious, you know?” OK. So what? Who does that help? Why’s the knowledge valuable? I have not seen a good answer to those questions. Or consider: if Metz for some bizarre reason decided to figure out who “Algon” on LW is and wrote an article revealing that I’m X because “it’s true”, I’d say that’s a waste of people’s time and a bit of a dick move.
Yes, he should still be allowed to do so, because regulating free speech well is hard and I’d rather eat the costs than deal with poor regulations. Doesn’t change the dickishness of it.
The former is socially useful because a lot of orthodoxies result in bad policies or cause people to err in their private lives or whatever. The latter mostly isn’t.
I think once you get concrete about it in the discourse, this basically translates to “supports racist and sexist policies”, albeit from the perspective of those who are pro these policies.
Let’s take autogynephilia theory as an example of a taboo belief, both because it’s something I’m familiar with and because it’s something Scott Alexander has spoken out against, so we’re not putting Scott Alexander in any uncomfortable position about it.
Autogynephilia theory has become taboo for various reasons. Some people argue that they should still disseminate it because it’s true, even if it doesn’t have any particular policy implications, but of course that leads to paradoxes where those people themselves tend to have privacy and reputation concerns, and aren’t happy about having true things about themselves shared publicly.
The alternate argument is on the basis of “a lot of orthodoxies result in bad policies or cause people to err in their private lives or whatever”, but when you get specific about what they mean rather than dismissing it as “or whatever”, it’s something like “autogynephilia theory is important to know so autogynephiles don’t end up thinking they’re trans and transitioning”, which in other language would mean something like “most trans women shouldn’t have transitioned, and we need policies to ensure that fewer transition in the future”. Which is generally considered an anti-trans position!
Now you might say, well, that position is a good position. But that’s a spicy argument to make, so a lot of the time people fall back on “autogynephilia theory is true and we should have a strong presumption in favor of saying true things”.
Now, there’s also the whole “the discourse is not real life” issue, where the people who advocate for some belief might not be representative of the applications of that belief.
I think once you get concrete about it in the discourse, this basically translates to “supports racist and sexist policies”, albeit from the perspective of those who are pro these policies.
That seems basically correct? And also fine. If you think lots of people are making mistakes that will hurt themselves/others/you and you can convince people about this by sharing info, that’s basically fine to me.
I still don’t understand what this has to do with doxxing someone. I suspect we’re talking past each other right now.
but of course that leads to paradoxes where those people themselves tend to have privacy and reputation concerns, and aren’t happy about having true things about themselves shared publicly.
What paradoxes, which people, which things? This isn’t a gotcha: I’m just struggling to parse this sentence right now. I can’t think of any concrete examples that fit. Maybe something like “there are autogynephiles who claim to be trans but aren’t really, and they’d be unhappy if this fact was shared because that would harm their reputation”? If that were true, and someone discovered a specific autogynephile who thinks they’re not really trans but presents as such and someone outed them, I would call that a dick move.
So I’m not sure what the paradox is. One stab at a potential paradox: if you spread the hypothetically true info that 99.99% of trans women are autogynephiles, then a rational agent would conclude that any particular trans woman is really a cis autogynephile. Which means you’re basically doxxing them by providing info that would, in this world, be relevant to societies making decisions about stuff like who’s allowed to compete in women’s sports.
I guess this is true, but it also seems like an extreme case to me. Most people aren’t that rational and, depending on the society, are willing to believe others about kinda-unlikely things about themselves. So in a less extreme hypothetical, say 90% instead of 99.99%, I can see people believing that most supposedly trans women aren’t trans, while still believing any specific person who claims they’re a trans woman.
EDIT: I believe that a significant fraction of conflicts aren’t mostly mistakes. But even there, the costs of attempts to restrict speech are quite high.
That seems basically correct? And also fine. If you think lots of people are making mistakes that will hurt themselves/others/you and you can convince people about this by sharing info, that’s basically fine to me.
I still don’t understand what this has to do with doxxing someone. I suspect we’re talking past each other right now.
I mean insofar as people insist they’re interested in it for political reasons, it makes sense to distinguish this from the doxxing and say that there’s no legitimate political use for Scott Alexander’s name.
The trouble is that often people de-emphasize their political motivations, as Scott Alexander did when he framed it as being about the most irrelevant orthodoxy you can think of, which one is simply interested in out of a curious itch. The most plausible motivation I can think of for using this frame is to avoid being associated with the political motivation.
But regardless of whether that explanation is true: if one says that there’s a strong presumption in favor of sharing truth, to the point where people are admired for digging into inconsequential dogmas that are taboo to question because they cover up moral contradictions that people are afraid will cause genocidal harm if unleashed, then it sure seems like this strong presumption in favor of truth also legitimizes mild cases of doxxing.
OK, now I understand the connection to doxxing much more clearly. Thank you. To be clear, I do not endorse enshrining a no-doxxing rule in law.
I still disagree, because it didn’t look like Metz had any reason to doxx Scott beyond “just because”. There were no big benefits to readers, nor any story about why there was no harm done to Scott in spite of his protests.
Whereas if I’m a journalist and encounter someone who says “if you release information about genetic differences in intelligence, that will cause a genocide”, I can give reasons for why that is unlikely. And I can give reasons for why the associated common-bundle-of-beliefs-and-values, i.e. orthodoxy, is not inconsequential, and that there are likely, large (albeit not genocide-large) harms that this orthodoxy is causing.
I mean I’m not arguing Cade Metz should have doxxed Scott Alexander, I’m just arguing that there is a tension between common rationalist ideology that one should have a strong presumption in favor of telling the truth, and that Cade Metz shouldn’t have doxxed Scott Alexander. As far as I can tell, this common rationalist ideology was a cover for spicier views that you have no issue admitting to, so I’m not exactly saying that there’s any contradiction in your vibe. More that there’s a contradiction in Scott Alexander’s (at least at the time of writing Kolmogorov Complicity).
I’m not sure what my own resolution to the paradox/contradiction is. Maybe that the root problem seems to be that people create information to bolster their side in political discourse, rather than to inform their ideology about how to address problems that they care about. In the latter case, creating information does real productive work, but in the former case, information mostly turns into a weapon, which incentivizes creating some of the most cursed pieces of information known to the world.
I’m just arguing that there is a tension between common rationalist ideology that one should have a strong presumption in favor of telling the truth, and that Cade Metz shouldn’t have doxxed Scott Alexander.
His doxxing Scott was in an article that also contained lies, lies which made the doxxing more harmful. He wouldn’t have just posted Scott’s real name in a context where no lies were involved.
Your argument rests on a false dichotomy. There are definitely other options than ‘wanting to know truth for no reason at all’ and ‘wanting to know truth to support racist policies’. It is at least plausibly the case that beneficial, non-discriminatory policies could result from knowledge currently considered taboo. It could at least be relevant to other things and therefore useful to know!
What plausible benefit is there to knowing Scott’s real name? What could it be relevant to?
People do sometimes make the case that knowing more information about sex and race differences can be helpful for women and black people. It’s a fine case to make, if one can actually make it work out in practice. My point is just that the other two approaches also exist.
This is the “rationalists’ sexist and racist beliefs are linked to support for sexist and racist policies” argument, which is something that some of the people who promote taboo beliefs try to avoid. For example, Scott Alexander argues that it can be understood simply as having a curious itch to understand “the most irrelevant orthodoxy you can think of”, which sure sounds different from “because they have important consequences for decisions and policy and life”.
I don’t think I was making that argument.
If one grants a presumption of the value of truth, and grants that it is permissible, admirable even, to follow the itch to uncover things that people would really rather downplay, then it seems really hard to say that Cade Metz did anything wrong.
By coincidence, Scott has written about this subject.
It is not surprising when a false belief held by lots of people is caused by the existence of a taboo. Otherwise the belief would probably already have been corrected, or wouldn’t have gained popularity in the first place. And giving examples of such beliefs is of course not really possible, precisely because it is taboo to argue that they are false.
It’s totally possible to say taboo things, I do it quite often.
But my point is more, this doesn’t seem to disprove the existence of the tension/Motte-Bailey/whatever dynamic that I’m pointing at.
The long and short of it is that Metz/NYT considered Scott to be bad enough to deserve whatever inconveniences/punishments would come to him as a result of tying his alleged wrongthink to his real name.
Which racist and sexist policies?
None, if you buy the “we just have a curious itch to understand the most irrelevant orthodoxy you can think of” explanation. But if that’s a valid reason for rationalists to dig into things that are taboo because of their harmful consequences, is it not then also valid for Cade Metz to follow a curious itch to dig into rationalists’ private information?
Well, I don’t understand what that position has to do with doxxing someone. What does obsessively pointing out how a reigning orthodoxy is incorrect have to do with revealing someone’s private info and making it hard for them to do their jobs? The former is socially useful because a lot of orthodoxy’s result in bad policies or cause people to err in their private lives or whatever. The latter mostly isn’t.
Yes, sometimes the two coincide, e.g. revealing that the church uses heliocentric models to calculate celestial movements, or Watergate, or whatever. But that’s quite rare, and I note Metz didn’t provide any argument that doxxing Scott is one of those cases.
Consider a counterfactual where Scott, in his private life, was crusading against DEI policies in a visible way. Then people benefiting from those policies may want to know that “there’s this political activist who’s advocating for policies that harm you, and the scope of his influence is way bigger than you thought.” That would clearly be useful info for a decent chunk of readers. Knowing his name would be useful!
Instead, it’s just “we gotta say his name. It’s so obvious, you know?” OK. So what? Who does that help? Why’s the knowledge valuable? I have not seen a good answer to those questions. Or consider: if Metz for some bizarre reason decided to figure out who “Algon” on LW is and wrote an article revealing that I’m X because “it’s true,” I’d say that’s a waste of people’s time and a bit of a dick move.
Yes, he should still be allowed to do so, because regulating free speech well is hard and I’d rather eat the costs than deal with poor regulations. That doesn’t change the dickishness of it.
I think once you get concrete about it in the discourse, this basically translates to “supports racist and sexist policies”, albeit from the perspective of those who are pro these policies.
Let’s take autogynephilia theory as an example of a taboo belief, both because it’s something I’m familiar with and because it’s something Scott Alexander has spoken out against, so we’re not putting Scott Alexander in any uncomfortable position about it.
Autogynephilia theory has become taboo for various reasons. Some people argue that they should still disseminate it because it’s true, even if it doesn’t have any particular policy implications, but of course that leads to paradoxes where those people themselves tend to have privacy and reputation concerns where they’re not happy about having true things about themselves shared publicly.
The alternate argument is on the basis of “a lot of orthodoxies result in bad policies or cause people to err in their private lives or whatever,” but when you get specific about what those are, rather than dismissing it as “or whatever,” it’s something like “autogynephilia theory is important to know so autogynephiles don’t end up thinking they’re trans and transitioning,” which in other language would mean something like “most trans women shouldn’t have transitioned, and we need policies to ensure that fewer transition in the future.” Which is generally considered an anti-trans position!
Now you might say, well, that position is a good position. But that’s a spicy argument to make, so a lot of the time people fall back on “autogynephilia theory is true and we should have a strong presumption in favor of saying true things”.
Now, there’s also the whole “the discourse is not real life” issue, where the people who advocate for some belief might not be representative of the applications of that belief.
That seems basically correct? And also fine. If you think lots of people are making mistakes that will hurt themselves/others/you and you can convince people about this by sharing info, that’s basically fine to me.
I still don’t understand what this has to do with doxxing someone. I suspect we’re talking past each other right now.
What paradoxes, which people, which things? This isn’t a gotcha: I’m just struggling to parse this sentence right now. I can’t think of any concrete examples that fit. Maybe something like “there are autogynephiles who claim to be trans but aren’t really, and they’d be unhappy if this fact were shared because that would harm their reputation”? If that were true, and someone discovered a specific autogynephile who thinks they’re not really trans but presents as such, and someone outed them, I would call that a dick move.
So I’m not sure what the paradox is. One stab at a potential paradox: if you spread the hypothetically true info that 99.99% of trans women are autogynephiles, then a rational agent would conclude that any particular trans woman is really a cis autogynephile. Which means you’re basically doxxing them, by providing info that would, in this world, be relevant to societies making decisions about stuff like who’s allowed to compete in women’s sports.
I guess this is true, but it also seems like an extreme case to me. Most people aren’t that rational, and depending on the society, are willing to believe kinda-unlikely things that others claim about themselves. So in a less extreme hypothetical, say 90% instead of 99.99%, I can see people believing most supposedly trans women aren’t trans, while still believing any specific person who claims to be a trans woman.
EDIT: I believe that a significant fraction of conflicts aren’t mostly mistakes. But even there, the costs of attempts to restrict speech are quite high.
I mean insofar as people insist they’re interested in it for political reasons, it makes sense to distinguish this from the doxxing and say that there’s no legitimate political use for Scott Alexander’s name.
The trouble is that often people de-emphasize their political motivations, as Scott Alexander did when he framed it around the most irrelevant orthodoxy you can think of, that one is simply interested in out of a curious itch. The most plausible motivation I can think of for making this frame is to avoid being associated with the political motivation.
But regardless of whether that explanation is true: if one says there’s a strong presumption in favor of sharing truth, strong enough to justify digging into inconsequential dogmas that are taboo to question because they cover up moral contradictions that people fear will cause genocidal harm if unleashed, then it sure seems like this strong presumption in favor of truth also legitimizes mild cases of doxxing.
Michael Bailey tends to insist that it’s bad to speculate about hidden motives that scientists like him might have for their research, yet when he explains his own research, he insists that he should study people’s hidden motivations using only the justification of truth and curiosity.
OK, now I understand the connection to doxxing much more clearly. Thank you. To be clear, I do not endorse enshrining a no-doxxing rule in law.
I still disagree, because it didn’t look like Metz had any reason to doxx Scott beyond “just because.” There were no big benefits to readers, nor any story about why no harm would be done to Scott in spite of his protests.
Whereas if I’m a journalist and encounter someone who says “if you release information about genetic differences in intelligence, that will cause a genocide,” I can give reasons for why that is unlikely. And I can give reasons for why the associated common-bundle-of-beliefs-and-values, i.e. orthodoxy, is not inconsequential: that there are likely, large (albeit not genocide-large) harms that this orthodoxy is causing.
I mean I’m not arguing Cade Metz should have doxxed Scott Alexander, I’m just arguing that there is a tension between common rationalist ideology that one should have a strong presumption in favor of telling the truth, and that Cade Metz shouldn’t have doxxed Scott Alexander. As far as I can tell, this common rationalist ideology was a cover for spicier views that you have no issue admitting to, so I’m not exactly saying that there’s any contradiction in your vibe. More that there’s a contradiction in Scott Alexander’s (at least at the time of writing Kolmogorov Complicity).
I’m not sure what my own resolution to the paradox/contradiction is. Maybe that the root problem seems to be that people create information to bolster their side in political discourse, rather than to inform their ideology about how to address problems that they care about. In the latter case, creating information does real productive work, but in the former case, information mostly turns into a weapon, which incentivizes creating some of the most cursed pieces of information known to the world.
His doxxing of Scott was in an article that also contained lies, lies which made the doxxing more harmful. He wouldn’t have just posted Scott’s real name in a context where no lies were involved.
Your argument rests on a false dichotomy. There are definitely other options than ‘wanting to know truth for no reason at all’ and ‘wanting to know truth to support racist policies’. It is at least plausibly the case that beneficial, non-discriminatory policies could result from knowledge currently considered taboo. It could at least be relevant to other things and therefore useful to know!
What plausible benefit is there to knowing Scott’s real name? What could it be relevant to?
People do sometimes make the case that knowing more information about sex and race differences can be helpful for women and black people. It’s a fine argument to make, if one can actually make it work out in practice. My point is just that the other two approaches also exist.