So, I love Scott, consider CM’s original article poorly written, and also think doxxing is quite rude, but with all the disclaimers out of the way: on the specific issue of revealing Scott’s last name, Cade Metz seems more right than Scott here? Scott was worried about a bunch of knock-on effects of having his last name published, but none of that bad stuff happened.[1]
I feel like at this point in the era of the internet, doxxing (at least in the form of involuntary identity association) is much more of an imagined threat than a real harm. Beff Jezos’s more recent doxxing also comes to mind as something that was more controversial for the controversy itself than for any actual harm done to Jezos as a result.
Scott did take a bunch of ameliorating steps, such as leaving his past job—but my best guess is that none of that would have actually been necessary. AFAICT he’s actually in a much better financial position thanks to his subsequent transition to Substack—though crediting Cade Metz for this is a bit like crediting Judas for starting Christianity.
Scott was worried about a bunch of knock-on effects of having his last name published, but none of that bad stuff happened.
Didn’t Scott quit his job as a result of this? I don’t have high confidence about how bad things would have been if Scott hadn’t taken costly actions to reduce the costs, but it seems that the evidence is mostly screened off by Scott doing a lot of stuff to make the consequences less bad and/or eating some of the costs in anticipation.
I mean, Scott seems to be in a pretty good situation now, in many ways better than before.
And yes, this is consistent with NYT hurting him in expectation.
But one difference between doxxing normal people versus doxxing “influential people” is that influential people typically have enough power to land on their feet when e.g. they lose a job. And so the fact that this has worked out well for Scott (and, seemingly, better than he expected) is some evidence that the NYT was better-calibrated about how influential Scott is than he was.
This seems like an example of the very very prevalent effect that Scott wrote about in “against bravery debates”, where everyone thinks their group is less powerful than they actually are. I don’t think there’s a widely-accepted name for it; I sometimes use underdog bias. My main diagnosis of the NYT/SSC incident is that rationalists were caught up by underdog bias, even as they leveraged thousands of influential tech people to attack the NYT.
I don’t think the NYT thing played much of a role in Scott being better off now. My guess is a small minority of people are subscribed to his Substack because of the NYT thing (the dominant factor is clearly the popularity of his writing).
My guess is the NYT thing hurt him quite a bit and made the potential consequences of him saying controversial things a lot worse for him. He has tried to do things to reduce the damage of that, but I almost never believe that “someone seems to be doing fine” is much evidence against “this action hurt this person”. Competent people often do fine even when faced with substantial adversity; that doesn’t mean the adversity is fine.
I do think it’s clear the consequences weren’t catastrophic, and separately I have a lot of sympathy for giving newspapers a huge amount of leeway to report on whatever true thing they want to report on, so overall I don’t have a super strong take here. But I also think the costs were probably on net pretty substantial (and, separately, that the evidence of how things have played out since then probably didn’t do very much to sway me from my priors about how large the cost would be, because Scott internalized a lot of the costs in advance).
I don’t think the NYT thing played much of a role in Scott being better off now. My guess is a small minority of people are subscribed to his Substack because of the NYT thing (the dominant factor is clearly the popularity of his writing).
What credence do you have that he would have started the substack at all without the NYT thing? I don’t have much information, but probably less than 80%. The timing sure seems pretty suggestive.
(I’m also curious about the likelihood that he would have started his startup without the NYT thing, but that’s less relevant since I don’t know whether the startup is actually going well.)
My guess is the NYT thing hurt him quite a bit and made the potential consequences of him saying controversial things a lot worse for him.
Presumably this is true of most previously-low-profile people that the NYT chooses to write about in not-maximally-positive ways, so it’s not a reasonable standard to hold them to. And so as a general rule I do think “the amount of adversity that you get when you used to be an influential yet unknown person but suddenly get a single media feature about you” is actually fine to inflict on people. In fact, I’d expect that many (or even most) people in this category will have a worse time of it than Scott—e.g. because they do things that are more politically controversial than Scott, have fewer avenues to make money, etc.
What credence do you have that he would have started the substack at all without the NYT thing? I don’t have much information, but probably less than 80%.
I mean, just because him starting a Substack was precipitated by a bunch of stress and uncertainty does not mean I credit the stress and uncertainty for the benefits of the Substack. Scott always could have started a Substack, and presumably had reasons for not doing so before the NYT thing. As an analogy, if I work at your company and have a terrible time, and then I quit, and then get a great job somewhere else, of course you get no credit for the quality of my new job.
The Substack situation seems analogous. It approximately does not matter whether Scott would have started the Substack without the NYT thing, so I don’t see the relevance of the question when trying to judge whether the NYT thing caused a bunch of harm.
In general, I would say:

Just because someone wasn’t successfully canceled, doesn’t mean there wasn’t a cancellation attempt, nor that most other people in their position would have withstood it
Just because they’re doing well now, doesn’t mean they wouldn’t have been doing better without the cancellation attempt
Even if the cancellation attempt itself did end up actually benefiting them, because they had the right personality and skills and position, that doesn’t mean this should have been expected ex ante
(After all, if it’s clear in advance to everyone involved that someone is uncancellable, then they’re less likely to try)
Even if it’s factually true that someone has the qualities and position to come out ahead after cancellation, they may not know or believe this, and thus the prospect of cancellation may successfully silence them
Even if they’re currently uncancellable and know this, that doesn’t mean they’ll remain so in the future
E.g. if they’re so good at what they do as to be unfireable, then maybe within a few years they’ll be offered a CEO position, at which point any cancel-worthy things they said years ago may limit their career; and if they foresee this, then that incentivizes self-censorship
The point is, cancellation attempts are bad because they create a chilling effect, an environment that incentivizes self-censorship and distorts intellectual discussion. And arguments of the form “Hey, this particular cancellation attempt wasn’t that bad because the target did well” fail against one or more of the points above: such attempts still create chilling effects, and that still makes them bad.
But it wasn’t a cancellation attempt. The issue at hand is whether a policy of doxxing influential people is a good idea. The benefits are transparency about who is influencing society, and in which ways; the harms include the ones you’ve listed above, about chilling effects.
It’s hard to weigh these against each other, but one way you might do so is by following a policy like “doxx people only if they’re influential enough that they’re probably robust to things like losing their job”. The correlation between “influential enough to be newsworthy” and “has many options open to them” isn’t perfect, but it’s strong enough that this policy seems pretty reasonable to me.
To flip this around, let’s consider individuals who are quietly influential in other spheres. For example, I expect there are people who many news editors listen to, when deciding how their editorial policies should work. I expect there are people who many Democrat/Republican staffers listen to, when considering how to shape policy. In general I think transparency about these people would be pretty good for the world. If those people happened to have day jobs which would suffer from that transparency, I would say “Look, you chose to have a bunch of influence, which the world should know about, and I expect you can leverage this influence to end up in a good position somehow even after I run some articles on you. Maybe you’re one of the few highly-influential people for whom this happens to not be true, but it seems like a reasonable policy to assume that if someone is actually pretty influential then they’ll land on their feet either way.” And the fact that this was true for Scott is some evidence that this would be a reasonable policy.
(I also think that taking someone influential who didn’t previously have a public profile, and giving them a public profile under their real name, is structurally pretty analogous to doxxing. Many of the costs are the same. In both cases one of the key benefits is allowing people to cross-reference information about that person to get a better picture of who is influencing the world, and how.)
The benefits are transparency about who is influencing society
In this particular case, I don’t really see any transparency benefits. If it was the case that there was important public information attached to Scott’s full name, then this argument would make sense to me.
(E.g. if Scott Alexander was actually Mark Zuckerberg or some other public figure with information attached to their real full name, then this argument would go through.)
Fair enough, if the NYT needs to have an extremely coarse-grained policy where they always doxx influential people consistently and can’t do any cost-benefit analysis on particular cases.
If it was the case that there was important public information attached to Scott’s full name, then this argument would make sense to me.
In general having someone’s actual name public makes it much easier to find out other public information attached to them. E.g. imagine if Scott were involved in shady business dealings under his real name. This is the sort of thing that the NYT wouldn’t necessarily discover just by writing the profile of him, but other people could subsequently discover after he was doxxed.
To be clear, btw, I’m not arguing that this doxxing policy is correct, all things considered. Personally I think the benefits of pseudonymity for a healthy ecosystem outweigh the public value of transparency about real names. I’m just arguing that there are policies consistent with the NYT’s actions which are fairly reasonable.
Many comments pointed out that NYT does not in fact have a consistent policy of always revealing people’s true names. There’s even a news editorial about this, which I point out in case you trust the fact-checking of the NY Post more.
I think that leaves 3 possible explanations of what happened:
NYT has a general policy of revealing people’s true names, which it doesn’t consistently apply but ended up applying in this case for no particular reason.
There’s an inconsistently applied policy, and Cade Metz’s (and/or his editors’) dislike of Scott contributed (consciously or subconsciously) to insistence on applying the policy in this particular case.
There is no policy and it was a purely personal decision.
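To illustrate what holding a probability distribution over these explanations might look like, here is a minimal Bayesian-update sketch. Every prior and likelihood below is an invented placeholder, not an estimate anyone in the thread has endorsed:

```python
# Toy Bayesian update over the three explanations above.
# All numbers are illustrative placeholders.
priors = {
    "general policy, applied here for no particular reason": 0.3,
    "inconsistent policy, applied partly out of dislike": 0.4,
    "no policy, purely personal decision": 0.3,
}
# How likely each hypothesis makes the observed evidence
# (e.g. the Charles Murray mention, no public written policy).
likelihoods = {
    "general policy, applied here for no particular reason": 0.2,
    "inconsistent policy, applied partly out of dislike": 0.6,
    "no policy, purely personal decision": 0.4,
}
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
for hypothesis, prob in posterior.items():
    print(f"{prob:.2f}  {hypothesis}")
```

With these made-up numbers the “dislike contributed” hypothesis ends up around 0.57; the point is the shape of the reasoning, not the specific outputs.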
In my view, most rationalists seem to be operating under a reasonable probability distribution over these hypotheses, informed by evidence such as Metz’s mention of Charles Murray, lack of a public written policy about revealing real names, and lack of evidence that a private written policy exists.

In effect, Cade Metz indirectly accused Scott of racism. Which arguably counts as a cancellation attempt.
Ok, let’s consider the case of shadowy influencers like that. It would be nice to know who such people were, sure. If they were up to nefarious things, or openly subscribed to ideologies that justify awful actions, then I’d like to know that. If there was an article that accurately laid out the nefarious things, that would be nice. If the article cherry-picked, presented facts misleadingly, made scurrilous insinuations every few paragraphs (without technically saying anything provably false)—that would be bad, possibly quite bad, but in some sense it’s par for the course for a certain tier of political writing.
When I see the combination of making scurrilous insinuations every few paragraphs and doxxing the alleged villain, I think that’s where I have to treat it as a deliberate cancellation attempt on the person. If it wasn’t actually deliberate, then it was at least “reckless disregard” or something, and I think it should be categorized the same way. If you’re going to doxx someone, I figure you accept an increased responsibility to be careful about what you say about them. (Presumably for similar reasons, libel laws are stricter about non-public figures. No, I’m not saying it’s libel when the statements are “not technically lying”; but it’s bad behavior and should be known as such.)
As for the “they’re probably robust” aspect… As mentioned in my other comment, even if they predictably “do well” afterwards, that doesn’t mean they haven’t been significantly harmed. If their influence is a following of 10M people, and the cancellation attempt reduces their influence by 40%, then it is simultaneously true that (a) “They have an audience of 6M people, they’re doing fine”, and (b) “They’ve been significantly harmed, and many people in such a position who anticipated this outcome would have a significant incentive to self-censor”. It remains a bad thing. It’s less bad than doing it to random civilians, sure, but it remains bad.
But one difference between doxxing normal people versus doxxing “influential people” is that influential people typically have enough power to land on their feet when e.g. they lose a job.
It may decrease their influence substantially, though. I’ll quote at length from here. It’s not about doxxing per se, but it’s about cancellation attempts (which doxxing a heretic enables), and about arguments similar to the above:
If you’re a writer, artist or academic who has strayed beyond the narrow bounds of approved discourse, two consequences will be intimately familiar. The first is that it becomes harder to get a hearing about anything. The second is that if you do manage to say anything publicly — especially if you talk about the silencing — it will be taken as proof that you have not been silenced.
This is the logic of witch-ducking. If a woman drowns, she isn’t a witch; if she floats, she is, and must be dispatched some other way. Either way, she ends up dead.
The only counter to this is specific examples. But censorship is usually covert: when you’re passed over to speak at a conference, exhibit in a gallery or apply for a visiting fellowship, you rarely find out. Every now and then, however, the censors tip their hands.
And so, for everyone who says I can’t have been cancelled because they can still hear me, here’s the evidence.
The first time I know I was censored was even before my book criticising trans ideology came out in mid-2021. I had been asked to talk about it on the podcast of Intelligence Squared, a media company that, according to its website, aims to “promote a global conversation”. We had booked a date and time.
But as the date approached I discovered I had been dropped. When I asked why, the response was surprisingly frank: fear of a social-media pile-on, sponsors getting cold feet and younger staff causing grief. The CEO of Intelligence Squared is a former war correspondent who has written a book about his experiences in Kosovo. But at the prospect of platforming a woman whose main message is that humans come in two sexes, his courage apparently ran out.
Next came the Irish Times, my home country’s paper of record. Soon after my book came out a well-known correspondent rang me, said he had stayed up all night to finish it and wanted to write about it. He interviewed me, filed the piece, checked the quotes — and then silence. When I nudged by email, he said the piece had been spiked as it was going to press.
Sometime around then it was the BBC’s turn. I don’t know the exact date because I only found out months later, when I met a presenter from a flagship news programme. Such a shame you couldn’t come on the show, he said, to which I replied I had never been asked. It turned out that he had told a researcher to invite me on, but the researcher hadn’t, instead simply lying that I wasn’t available. I’ve still never been on the BBC to discuss trans issues.
Next came ABC, the Australian state broadcaster, which interviewed me for a radio show about religion and ethics. This time, when I nudged, I was told there had been “technical glitches” with the recording, but they would “love to revisit this one in the future”. They’ve never been back in touch. [… several more examples …]
Now, the author has a bestselling book, has been on dozens of podcasts, and now works for an advocacy organization that’s 100% behind her message (she’s not technically listed as a cofounder, but might as well be). She has certainly landed on her feet and has a decent level of reach; yet, clearly, if not for a bunch of incidents like the above—and, as she says, probably a lot more incidents for which she doesn’t have specific evidence—then she would have had much greater reach.
In Scott’s case… if we consider the counterfactual where there wasn’t a NYT article drawing such smears against Scott, then, who knows, maybe today some major news organizations (perhaps the NYT itself!) would have approached him for permission to republish some Slate Star Codex articles on their websites, perhaps specifically some of those on AI during the last ~year when AI became big news. Or offered to interview him for a huge audience on important topics, or something.
So be careful not to underestimate the extent of unseen censorship and cancellation, and therefore the damage done by “naming and shaming” tactics.

+1, I agree with all of this, and generally consider the SSC/NYT incident to be an example of the rationalist community being highly tribalist.

(more on this in a twitter thread, which I’ve copied over to LW here)

What do you mean by example, here? That this is demonstrating a broader property, or that in this situation, there was a tribal dynamic?
There were two issues: what is the cost of doxxing, and what is the benefit of doxxing. I think an equally important crux of disagreement is the latter, not the former. IMO the benefit was zero: it’s not newsworthy, it brings no relevant insight, publishing it does not advance the public interest, it’s totally irrelevant to the story. Here CM doesn’t directly argue that there was any benefit to doxxing; instead he kinda conveys a vibe / ideology that if something is true then it is self-evidently intrinsically good to publish it (but of course that self-evident intrinsic goodness can be outweighed by sufficiently large costs). Anyway, if the true benefit is zero (as I believe), then we don’t have to quibble over whether the cost was big or small.
Trouble is, the rationalist community tends to get involved with taboo topics and regularly defends itself by saying that it’s because it’s self-evidently good for the truth to be known. Thus there is (at least apparently) an inconsistency.
There’s a fact of the matter about whether the sidewalk on my street has an odd vs even number of pebbles on it, but I think everyone including rationalists will agree that there’s no benefit of sharing that information. It’s not relevant for anything else.
By contrast, taboo topics generally become taboo because they have important consequences for decisions and policy and life.
If lots of people have a false belief X, that’s prima facie evidence that “X is false” is newsworthy. There’s probably some reason that X rose to attention in the first place; and if nothing else, “X is false” at the very least should update our priors about what fraction of popular beliefs are true vs false.
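As a toy version of that last clause (a sketch with invented numbers, not anyone’s actual estimates): model the fraction of popular beliefs that are true with a Beta prior, and treat “popular belief X turned out false” as one more observation.

```python
# Minimal sketch: one observed-false popular belief nudges a Beta prior
# over "what fraction of popular beliefs are true". Numbers are invented.
a, b = 8.0, 2.0                    # Beta(8, 2) prior: mean 0.80
prior_mean = a / (a + b)
b += 1.0                           # observe one false popular belief
posterior_mean = a / (a + b)       # 8 / 11, roughly 0.73
print(f"prior mean {prior_mean:.2f} -> posterior mean {posterior_mean:.2f}")
```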
Once we’ve established that “X is false” is newsworthy at all, we still need to weigh the cost vs benefits of disseminating that information.
I hope that everyone, including rationalists, is in agreement about all this. For example, prominent rationalists are familiar with the idea of infohazards, reputational risks, picking your battles, simulacrum level 2, and so on. I’ve seen a lot of strong disagreement on this forum about what newsworthy information should and shouldn’t be disseminated and in what formats and contexts. I sure have my own opinions!
…But all that is irrelevant to this discussion here. I was talking about whether Scott’s last name is newsworthy in the first place. For example, it’s not the case that lots of people around the world were under the false impression that Scott’s true last name was McSquiggles, and now NYT is going to correct the record. (It’s possible that lots of people around the world were under the false impression that Scott’s true last name is Alexander, but that misconception can be easily corrected by merely saying it’s a pseudonym.) If Scott’s true last name revealed that he was secretly British royalty, or secretly Albert Einstein’s grandson, etc., that would also at least potentially be newsworthy.
Not everything is newsworthy. The pebbles-on-the-sidewalk example I mentioned above is not newsworthy. I think Scott’s name is not newsworthy either. Incidentally, I also think there should be a higher bar for what counts as newsworthy in NYT, compared to what counts as newsworthy when I’m chatting with my spouse about what happened today, because of the higher opportunity cost.
I agree, I’m just trying to say that the common rationalist theories on this topic often disagree with your take.
If lots of people have a false belief X, that’s prima facie evidence that “X is false” is newsworthy. There’s probably some reason that X rose to attention in the first place; and if nothing else, “X is false” at the very least should update our priors about what fraction of popular beliefs are true vs false.
I think this argument would be more transparent with examples. Whenever I think of examples of popular beliefs that it would be reasonable to change one’s support of in the light of this, they end up involving highly politicized taboos.
Once we’ve established that “X is false” is newsworthy at all, we still need to weigh the cost vs benefits of disseminating that information.
I hope that everyone, including rationalists, is in agreement about all this. For example, prominent rationalists are familiar with the idea of infohazards, reputational risks, picking your battles, simulacrum level 2, and so on. I’ve seen a lot of strong disagreement on this forum about what newsworthy information should and shouldn’t be disseminated and in what formats and contexts. I sure have my own opinions!
There are different distinctions when it comes to infohazards. One is non-Bayesian infohazards, where certain kinds of information are thought to break people’s rationality; that seems obscure and not so relevant here. Another is recipes for destruction, where you give a small hostile faction the ability to unilaterally cause harm. This could theoretically be applicable if we were talking about publishing Scott Alexander’s personal address and his habits about when and where he goes, as that makes it more practical for terrorists to attack him. But that seems less relevant for his real name, when it is readily available and he ends up facing tons of attention regardless.
Reputational risks can at times be acknowledged, but at the same time reputational risks are one of the main justifications for the taboos. Stereotyping is basically reputational risk on a group level; if rationalists dismiss the danger of stereotyping with “well, I just have a curious itch”, that sure seems like a strong presumption of truthtelling over reputational risk.
Picking your battles seems mostly justified on pragmatics, so it seems to me that the NYT can just go “this is a battle that we can afford to pick”.
Rationalists seem to usually consider simulacrum level 2 to be pathological, on the basis of presumption of the desirability of truth.
…But all that is irrelevant to this discussion here. I was talking about whether Scott’s last name is newsworthy in the first place. For example, it’s not the case that lots of people around the world were under the false impression that Scott’s true last name was McSquiggles, and now NYT is going to correct the record. (It’s possible that lots of people around the world were under the false impression that Scott’s true last name is Alexander, but that misconception can be easily corrected by merely saying it’s a pseudonym.) If Scott’s true last name revealed that he was secretly British royalty, or secretly Albert Einstein’s grandson, etc., that would also at least potentially be newsworthy.
Not everything is newsworthy. The pebbles-on-the-sidewalk example I mentioned above is not newsworthy. I think Scott’s name is not newsworthy either. Incidentally, I also think there should be a higher bar for what counts as newsworthy in NYT, compared to what counts as newsworthy when I’m chatting with my spouse about what happened today, because of the higher opportunity cost.
I think this is a perfectly valid argument for why NYT shouldn’t publish it, it just doesn’t seem very strong or robust and doesn’t square well with the general pro-truth ideology.
Like, if the NYT did go out and count the number of pebbles on your road, then yes there’s an opportunity cost to this etc., which makes it a pretty unnecessary thing to do, but it’s not like you’d have any good reason to whip out a big protest or anything. This is the sort of thing where at best the boss should go “was that really necessary?”, and both “no, it was an accident” or “yes, because of <obscure policy reason>” are fine responses.
If one grants a presumption of the value of truth, and grants that it is permissible, admirable even, to follow the itch to uncover things that people would really rather downplay, then it seems really hard to say that Cade Metz did anything wrong.
Another is recipes for destruction, where you give a small hostile faction the ability to unilaterally cause harm. … But that seems less relevant for his real name, when it is readily available and he ends up facing tons of attention regardless.
Not being completely hidden isn’t “readily available”. If finding his name is even a trivial inconvenience, it doesn’t cause the damage caused by plastering his name in the Times.
I think this is a perfectly valid argument for why NYT shouldn’t publish it, it just doesn’t seem very strong or robust… Like, if the NYT did go out and count the number of pebbles on your road, then yes there’s an opportunity cost to this etc., which makes it a pretty unnecessary thing to do, but it’s not like you’d have any good reason to whip out a big protest or anything.
The context from above is that we’re weighing costs vs benefits of publishing the name, and I was pulling out the sub-debate over what the benefits are (setting aside the disagreement about how large the costs are).
I agree that “the benefits are ≈0” is not a strong argument that the costs outweigh the benefits in and of itself, because maybe the costs are ≈0 as well. If a journalist wants to report the thickness of Scott Alexander’s shoelaces, maybe the editor will say it’s a waste of limited wordcount, but the journalist could say “hey it’s just a few words, and y’know, it adds a bit of color to the story”, and that’s a reasonable argument: the cost and benefit are each infinitesimal, and reasonable people can disagree about which one slightly outweighs the other.
But “the benefits are ≈0” is a deciding factor in a context where the costs are not infinitesimal. Like if Scott asserts that a local gang will beat him senseless if the journalist reports the thickness of his shoelaces, it’s no longer infinitesimal costs versus infinitesimal benefits, but rather real costs vs infinitesimal benefits.
If the objection is “maybe the shoelace thickness is actually Scott’s dark embarrassing secret that the public has an important interest in knowing”, then yeah that’s possible and the journalist should certainly look into that possibility. (In the case at hand, if Scott were secretly SBF’s brother, then everyone agrees that his last name would be newsworthy.) But if the objection is just “Scott might be exaggerating, maybe the gang won’t actually beat him up too badly if the shoelace thing is published”, then I think a reasonable ethical journalist would just leave out the tidbit about the shoelaces, as a courtesy, given that there was never any reason to put it in in the first place.
I get that this is an argument one could make. But the reason I started this tangent was because you said:
Here CM doesn’t directly argue that there was any benefit to doxxing; instead he kinda conveys a vibe / ideology that if something is true then it is self-evidently intrinsically good to publish it
That is, my original argument was not in response to the “Anyway, if the true benefit is zero (as I believe), then we don’t have to quibble over whether the cost was big or small” part of your post, it was to the vibe/ideology part.
Where I was trying to say, it doesn’t seem to me that Cade Metz was the one who introduced this vibe/ideology, rather it seems to have been introduced by rationalists prior to this, specifically to defend tinkering with taboo topics.
Like, you mention that Cade Metz conveys this vibe/ideology that you disagree with, and you didn’t try to rebut it directly, I assume because Cade Metz didn’t defend it but just treated it as obvious.
And that’s where I’m saying, since many rationalists including Scott Alexander have endorsed this ideology, there’s a sense in which it seems wrong, almost rude, to not address it directly. Like a sort of motte-and-bailey tactic.
If lots of people have a false belief X, that’s prima facie evidence that “X is false” is newsworthy. There’s probably some reason that X rose to attention in the first place; and if nothing else, “X is false” at the very least should update our priors about what fraction of popular beliefs are true vs false.
I think this argument would be more transparent with examples. Whenever I think of examples of popular beliefs that it would be reasonable to change one’s support of in the light of this, they end up involving highly politicized taboos.
It is not surprising when a lot of people having a false belief is caused by the existence of a taboo. Otherwise the belief would probably already have been corrected or wouldn’t have gained popularity in the first place. And giving examples for such beliefs of course is not really possible, precisely because it is taboo to argue that they are false.
Metz/NYT disagree. He doesn’t completely spell out why (it’s not his style), but, luckily, Scott himself did:
If someone thinks I am so egregious that I don’t deserve the mask of anonymity, then I guess they have to name me, the same way they name criminals and terrorists.
Metz/NYT considered Scott to be bad enough to deserve whatever inconveniences/punishments would come to him as a result of tying his alleged wrongthink to his real name, is the long and short of it.
None, if you buy the “we just have a curious itch to understand the most irrelevant orthodoxy you can think of” explanation. But if that’s a valid reason for rationalists to dig into things that are taboo because of their harmful consequences, is it not then also valid for Cade Metz to follow a curious itch to dig into rationalists’ private information?
Well, I don’t understand what that position has to do with doxxing someone. What does obsessively pointing out how a reigning orthodoxy is incorrect have to do with revealing someone’s private info and making it hard for them to do their job? The former is socially useful because a lot of orthodoxies result in bad policies or cause people to err in their private lives or whatever. The latter mostly isn’t.
Yes, sometimes the two coincide, e.g. revealing that the church uses heliocentric models to calculate celestial movements, or Watergate, or whatever. But that’s quite rare, and I note Metz didn’t provide any argument that doxxing Scott is like one of those cases.
Consider a counterfactual where Scott was, in his private life, crusading against DEI policies in a visible way. Then people benefiting from those policies may want to know that “there’s this political activist who’s advocating for policies that harm you, and the scope of his influence is way bigger than you thought”. Which would clearly be useful info for a decent chunk of readers. Knowing his name would be useful!
Instead, it’s just “we gotta say his name. It’s so obvious, you know?” OK. So what? Who does that help? Why’s the knowledge valuable? I have not seen a good answer to those questions. Or consider: if Metz for some bizarre reason decided to figure out who “Algon” on LW is and wrote an article revealing that I’m X because “it’s true”, I’d say that’s a waste of people’s time and a bit of a dick move.
Yes, he should still be allowed to do so, because regulating free speech well is hard and I’d rather eat the costs than deal with poor regulations. Doesn’t change the dickishness of it.
The former is socially useful because a lot of orthodoxies result in bad policies or cause people to err in their private lives or whatever. The latter mostly isn’t.
I think once you get concrete about it in the discourse, this basically translates to “supports racist and sexist policies”, albeit from the perspective of those who are pro these policies.
Let’s take autogynephilia theory as an example of a taboo belief, both because it’s something I’m familiar with and because it’s something Scott Alexander has spoken out against, so we’re not putting Scott Alexander in any uncomfortable position about it.
Autogynephilia theory has become taboo for various reasons. Some people argue that they should still disseminate it because it’s true, even if it doesn’t have any particular policy implications, but of course that leads to paradoxes where those people themselves tend to have privacy and reputation concerns and aren’t happy about having true things about themselves shared publicly.
The alternate argument is on the basis of “a lot of orthodoxies result in bad policies or cause people to err in their private lives or whatever”, but when you get specific about what those are, rather than dismissing them with “or whatever”, it’s something like “autogynephilia theory is important to know so autogynephiles don’t end up thinking they’re trans and transitioning”, which in other language would mean something like “most trans women shouldn’t have transitioned, and we need policies to ensure that fewer transition in the future”. Which is generally considered an anti-trans position!
Now you might say, well, that position is a good position. But that’s a spicy argument to make, so a lot of the time people fall back on “autogynephilia theory is true and we should have a strong presumption in favor of saying true things”.
Now, there’s also the whole “the discourse is not real life” issue, where the people who advocate for some belief might not be representative of the applications of that belief.
I think once you get concrete about it in the discourse, this basically translates to “supports racist and sexist policies”, albeit from the perspective of those who are pro these policies.
That seems basically correct? And also fine. If you think lots of people are making mistakes that will hurt themselves/others/you and you can convince people about this by sharing info, that’s basically fine to me.
I still don’t understand what this has to do with doxxing someone. I suspect we’re talking past each other right now.
but of course that leads to paradoxes where those people themselves tend to have privacy and reputation concerns and aren’t happy about having true things about themselves shared publicly.
What paradoxes, which people, which things? This isn’t a gotcha: I’m just struggling to parse this sentence right now. I can’t think of any concrete examples that fit. Maybe something like “there are autogynephiles who claim to be trans but aren’t really, and they’d be unhappy if this fact were shared because that would harm their reputation”? If that were true, and someone discovered a specific autogynephile who thinks they’re not really trans but presents as such, and outed them, I would call that a dick move.
So I’m not sure what the paradox is. One stab at a potential paradox: if you spread the hypothetically true info that 99.99% of trans women are autogynephiles, then a rational agent would conclude that any particular trans woman is really a cis autogynephile. Which means you’re basically doxxing them by providing info that would, in this world, be relevant to societies making decisions about stuff like who’s allowed to compete in women’s sports.
I guess this is true, but it also seems like an extreme case to me. Most people aren’t that rational, and, depending on the society, are willing to believe others about kinda-unlikely things about themselves. So in a less extreme hypothetical, say 90% rather than 99.99%, I can see people believing that most supposedly trans women aren’t trans, while still believing any specific person who claims they’re a trans woman.
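A minimal sketch of the contrast between those two base rates (the 99.99% and 90% figures are the hypotheticals above; the likelihood ratio assigned to an individual’s testimony is an invented assumption):

```python
# How much can an individual's testimony shift a rational agent away from
# the base rate? Assume (invented number) that the statement "I really am
# trans" is 10x more likely from someone who is trans than from someone
# who is not.
def posterior_not_trans(base_rate_not_trans: float, testimony_lr: float = 10.0) -> float:
    prior_odds = base_rate_not_trans / (1.0 - base_rate_not_trans)
    posterior_odds = prior_odds / testimony_lr
    return posterior_odds / (1.0 + posterior_odds)

for base_rate in (0.90, 0.9999):
    print(f"base rate {base_rate:.2%} -> posterior {posterior_not_trans(base_rate):.2%}")
# At a 90% base rate the testimony roughly cancels it (posterior ~47%);
# at 99.99% the agent remains ~99.9% confident, matching the point above.
```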
EDIT: I believe that a significant fraction of conflicts aren’t mostly mistakes. But even there, the costs of attempts to restrict speech are quite high.
That seems basically correct? And also fine. If you think lots of people are making mistakes that will hurt themselves/others/you and you can convince people about this by sharing info, that’s basically fine to me.
I still don’t understand what this has to do with doxxing someone. I suspect we’re talking past each other right now.
I mean insofar as people insist they’re interested in it for political reasons, it makes sense to distinguish this from the doxxing and say that there’s no legitimate political use for Scott Alexander’s name.
The trouble is that often people de-emphasize their political motivations, as Scott Alexander did when he framed it around the most irrelevant orthodoxy you can think of, that one is simply interested in out of a curious itch. The most plausible motivation I can think of for making this frame is to avoid being associated with the political motivation.
But regardless of whether that explanation is true, if one says there’s a strong presumption in favor of sharing truth, strong enough to vindicate people who dig into inconsequential dogmas that are taboo to question because they cover up moral contradictions people are afraid will cause genocidal harm if unleashed, then it sure seems like this strong presumption in favor of truth also legitimizes mild cases of doxxing.
OK, now I understand the connection to doxxing much more clearly. Thank you. To be clear, I do not endorse legislating a no-doxxing rule.
I still disagree, because it didn’t look like Metz had any reason to doxx Scott beyond “just because”. There were no big benefits to readers, or any story about why there was no harm done to Scott in spite of his protests.
Whereas if I’m a journalist and encounter someone who says “if you release information about genetic differences in intelligence, that will cause a genocide”, I can give reasons for why that is unlikely. And I can give reasons for why the associated common bundle of beliefs and values, i.e. the orthodoxy, is not inconsequential: that there are likely large (albeit not genocide-large) harms that this orthodoxy is causing.
I mean I’m not arguing Cade Metz should have doxxed Scott Alexander, I’m just arguing that there is a tension between common rationalist ideology that one should have a strong presumption in favor of telling the truth, and that Cade Metz shouldn’t have doxxed Scott Alexander. As far as I can tell, this common rationalist ideology was a cover for spicier views that you have no issue admitting to, so I’m not exactly saying that there’s any contradiction in your vibe. More that there’s a contradiction in Scott Alexander’s (at least at the time of writing Kolmogorov Complicity).
I’m not sure what my own resolution to the paradox/contradiction is. Maybe that the root problem seems to be that people create information to bolster their side in political discourse, rather than to inform their ideology about how to address problems that they care about. In the latter case, creating information does real productive work, but in the former case, information mostly turns into a weapon, which incentivizes creating some of the most cursed pieces of information known to the world.
I’m just arguing that there is a tension between common rationalist ideology that one should have a strong presumption in favor of telling the truth, and that Cade Metz shouldn’t have doxxed Scott Alexander.
His doxxing Scott was in an article that also contained lies, lies which made the doxxing more harmful. He wouldn’t have just posted Scott’s real name in a context where no lies were involved.
Your argument rests on a false dichotomy. There are definitely other options than ‘wanting to know truth for no reason at all’ and ‘wanting to know truth to support racist policies’. It is at least plausibly the case that beneficial, non-discriminatory policies could result from knowledge currently considered taboo. It could at least be relevant to other things and therefore useful to know!
What plausible benefit is there to knowing Scott’s real name? What could it be relevant to?
People do sometimes make the case that knowing more information about sex and race differences can be helpful for women and black people. It’s a fine case to make, if one can actually make it work out in practice. My point is just that the other two approaches also exist.
I think this is clearly true, but the application is a bit dubious. There’s a difference between “we have to talk about the bell curve here even though the object-level benefit is very dubious because of the principle that we oppose censorship” and “let’s doxx someone”. I don’t think it’s inconsistent to be on board with the first (which I think a lot of rationalists have proven to be, and which is an example of what you claimed exists) but not the second (which is the application here).
Scott tried hard to avoid getting into the race/IQ controversy. Like, in the private email LGS shared, Scott states “I will appreciate if you NEVER TELL ANYONE I SAID THIS”. Isn’t this the opposite of “it’s self-evidently good for the truth to be known”? And yes there’s a SSC/ACX community too (not “rationalist” necessarily), but Metz wasn’t talking about the community there.
My opinion as a rationalist is that I’d like the whole race/IQ issue to f**k off so we don’t have to talk or think about it, but certain people like to misrepresent Scott and make unreasonable claims, which ticks me off, so I counterargue, just as I pushed a video by Shaun once when I thought somebody on ACX sounded a bit racist to me on the race/IQ topic.
Scott and myself are consequentialists. As such, it’s not self-evidently good for the truth to be known. I think some taboos should be broached, but not “self-evidently” and often not by us. But if people start making BS arguments against people I like? I will call BS on that, even if doing so involves some discussion of the taboo topic. But I didn’t wake up this morning having any interest in doing that.
I agree that Scott Alexander’s position is that it’s not self-evidently good for the truth about his own views to be known. I’m just saying there’s a bunch of times he’s alluded to or outright endorsed it being self-evidently good for the truth to be known in general, in order to defend himself when criticized for being interested in the truth about taboo topics.
So, I love Scott, consider CM’s original article poorly written, and also think doxxing is quite rude, but with all the disclaimers out of the way: on the specific issue of revealing Scott’s last name, Cade Metz seems more right than Scott here? Scott was worried about a bunch of knock-off effects of having his last name published, but none of that bad stuff happened.[1]
I feel like at this point in the era of the internet, doxxing (at least, in the form of involuntary identity association) is much more of an imagined threat than a real harm. Beff Jezos’s more recent doxxing also comes to mind as something that was more controversial for the controversy, than for any factual harms done to Jezos as a result.
Scott did take a bunch of ameliorating steps, such as leaving his past job—but my best guess is that none of that would have actually been necessary. AFAICT he’s actually in a much better financial position thanks to subsequent transition to Substack—though crediting Cade Metz for this is a bit like crediting Judas for starting Christianity.
Didn’t Scott quit his job as a result of this? I don’t have high confidence on how bad things would have been if Scott hadn’t taken costly actions to reduce the costs, but it seems that the evidence is mostly screened off by Scott doing a lot of stuff to make the consequences less bad and/or eating some of the costs in anticipation.
I mean, Scott seems to be in a pretty good situation now, in many ways better than before.
And yes, this is consistent with NYT hurting him in expectation.
But one difference between doxxing normal people versus doxxing “influential people” is that influential people typically have enough power to land on their feet when e.g. they lose a job. And so the fact that this has worked out well for Scott (and, seemingly, better than he expected) is some evidence that the NYT was better-calibrated about how influential Scott is than he was.
This seems like an example of the very very prevalent effect that Scott wrote about in “against bravery debates”, where everyone thinks their group is less powerful than they actually are. I don’t think there’s a widely-accepted name for it; I sometimes use underdog bias. My main diagnosis of the NYT/SSC incident is that rationalists were caught up by underdog bias, even as they leveraged thousands of influential tech people to attack the NYT.
I don’t think the NYT thing played much of a role in Scott being better off now. My guess is a small minority of people are subscribed to his Substack because of the NYT thing (the dominant factor is clearly the popularity of his writing).
My guess is the NYT thing hurt him quite a bit and made the potential consequences of him saying controversial things a lot worse for him. He has tried to do things to reduce the damage of that, but I generally don’t believe that “someone seems to be doing fine” is almost ever much evidence against “this action hurt this person”. Competent people often do fine even when faced with substantial adversity, this doesn’t mean the adversity is fine.
I do think it’s clear the consequences weren’t catastrophic, and I also separately actually have a lot of sympathy for giving newspapers a huge amount of leeway to report on whatever true thing they want to report on, so that I overall don’t have a super strong take here, but I also think the costs here were probably on-net pretty substantial (and also separately that the evidence of how things have played out since then probably didn’t do very much to sway me from my priors of how much the cost would be, due to Scott internalizing the costs in advance a bunch).
What credence do you have that he would have started the substack at all without the NYT thing? I don’t have much information, but probably less than 80%. The timing sure seems pretty suggestive.
(I’m also curious about the likelihood that he would have started his startup without the NYT thing, but that’s less relevant since I don’t know whether the startup is actually going well.)
Presumably this is true of most previously-low-profile people that the NYT chooses to write about in not-maximally-positive ways, so it’s not a reasonable standard to hold them to. And so as a general rule I do think “the amount of adversity that you get when you used to be an influential yet unknown person but suddenly get a single media feature about you” is actually fine to inflict on people. In fact, I’d expect that many (or even most) people in this category will have a worse time of it than Scott—e.g. because they do things that are more politically controversial than Scott, have fewer avenues to make money, etc.
I mean, just because him starting a Substack was precipitated by a bunch of stress and uncertainty does not mean I credit the stress and uncertainty for the benefits of the Substack. Scott always could have started a Substack, and presumably had reasons for not doing so before the NYT thing. As an analogy, if I work at your company and have a terrible time, and then I quit, and then get a great job somewhere else, of course you get no credit for the quality of my new job.
The Substack situation seems analogous. It approximately does not matter whether Scott would have started the Substack without the NYT thing, so I don’t see the relevance of the question when trying to judge whether the NYT thing caused a bunch of harm.
In general, I would say:
Just because someone wasn’t successfully canceled, doesn’t mean there wasn’t a cancellation attempt, nor that most other people in their position would have withstood it
Just because they’re doing well now, doesn’t mean they wouldn’t have been doing better without the cancellation attempt
Even if the cancellation attempt itself did end up actually benefiting them, because they had the right personality and skills and position, that doesn’t mean this should have been expected ex ante
(After all, if it’s clear in advance to everyone involved that someone is uncancellable, then they’re less likely to try)
Even if it’s factually true that someone has the qualities and position to come out ahead after cancellation, they may not know or believe this, and thus the prospect of cancellation may successfully silence them
Even if they’re currently uncancellable and know this, that doesn’t mean they’ll remain so in the future
E.g. if they’re so good at what they do as to be unfireable, then maybe within a few years they’ll be offered a CEO position, at which point any cancel-worthy things they said years ago may limit their career; and if they foresee this, then that incentivizes self-censorship
The point is, cancellation attempts are bad because they create a chilling effect, an environment that incentivizes self-censorship and distorts intellectual discussion. And arguments of the form “Hey, this particular cancellation attempt wasn’t that bad because the target did well” fall down to one or more of the above arguments: they still create chilling effects and that still makes them bad.
But it wasn’t a cancellation attempt. The issue at hand is whether a policy of doxxing influential people is a good idea. The benefits are transparency about who is influencing society, and in which ways; the harms include the ones you’ve listed above, about chilling effects.
It’s hard to weigh these against each other, but one way you might do so is by following a policy like “doxx people only if they’re influential enough that they’re probably robust to things like losing their job”. The correlation between “influential enough to be newsworthy” and “has many options open to them” isn’t perfect, but it’s strong enough that this policy seems pretty reasonable to me.
To flip this around, let’s consider individuals who are quietly influential in other spheres. For example, I expect there are people who many news editors listen to, when deciding how their editorial policies should work. I expect there are people who many Democrat/Republican staffers listen to, when considering how to shape policy. In general I think transparency about these people would be pretty good for the world. If those people happened to have day jobs which would suffer from that transparency, I would say “Look, you chose to have a bunch of influence, which the world should know about, and I expect you can leverage this influence to end up in a good position somehow even after I run some articles on you. Maybe you’re one of the few highly-influential people for whom this happens to not be true, but it seems like a reasonable policy to assume that if someone is actually pretty influential then they’ll land on their feet either way.” And the fact that this was true for Scott is some evidence that this would be a reasonable policy.
(I also think that taking someone influential who didn’t previously have a public profile, and giving them a public profile under their real name, is structurally pretty analogous to doxxing. Many of the costs are the same. In both cases one of the key benefits is allowing people to cross-reference information about that person to get a better picture of who is influencing the world, and how.)
In this particular case, I don’t really see any transparency benefits. If it was the case that there was important public information attached to Scott’s full name, then this argument would make sense to me.
(E.g. if Scott Alexander was actually Mark Zuckerberg or some other public figure with information attacked to their real full name then this argument would go through.)
Fair enough if NYT needs to have a extremely coarse grained policy where they always dox influential people consistently and can’t do any cost benefit on particular cases.
In general having someone’s actual name public makes it much easier to find out other public information attached to them. E.g. imagine if Scott were involved in shady business dealings under his real name. This is the sort of thing that the NYT wouldn’t necessarily discover just by writing the profile of him, but other people could subsequently discover after he was doxxed.
To be clear, btw, I’m not arguing that this doxxing policy is correct, all things considered. Personally I think the benefits of pseudonymity for a healthy ecosystem outweigh the public value of transparency about real names. I’m just arguing that there are policies consistent with the NYT’s actions which are fairly reasonable.
Many comments pointed out that NYT does not in fact have a consistent policy of always revealing people’s true names. There’s even a news editorial about this which I point out in case you trust the fact-checking of NY Post more.
I think that leaves 3 possible explanations of what happened:
NYT has a general policy of revealing people’s true names, which it doesn’t consistently apply but ended up applying in this case for no particular reason.
There’s an inconsistently applied policy, and Cade Metz’s (and/or his editors’) dislike of Scott contributed (consciously or subconsciously) to insistence on applying the policy in this particular case.
There is no policy and it was a purely personal decision.
In my view, most rationalists seem to be operating under a reasonable probability distribution over these hypotheses, informed by evidence such as Metz’s mention of Charles Murray, lack of a public written policy about revealing real names, and lack of evidence that a private written policy exists.
In effect Cade Metz indirectly accused Scott of racism. Which arguably counts as a cancellation attempt.
Ok, let’s consider the case of shadowy influencers like that. It would be nice to know who such people were, sure. If they were up to nefarious things, or openly subscribed to ideologies that justify awful actions, then I’d like to know that. If there was an article that accurately laid out the nefarious things, that would be nice. If the article cherry-picked, presented facts misleadingly, made scurrilous insinuations every few paragraphs (without technically saying anything provably false)—that would be bad, possibly quite bad, but in some sense it’s par for the course for a certain tier of political writing.
When I see the combination of making scurrilous insinuations every few paragraphs and doxxing the alleged villain, I think that’s where I have to treat it as a deliberate cancellation attempt on the person. If it wasn’t actually deliberate, then it was at least “reckless disregard” or something, and I think it should be categorized the same way. If you’re going to dox someone, I figure you accept an increased responsibility to be careful about what you say about them. (Presumably for similar reasons, libel laws are stricter about non-public figures. No, I’m not saying it’s libel when the statements are “not technically lying”; but it’s bad behavior and should be known as such.)
As for the “they’re probably robust” aspect… As mentioned in my other comment, even if they predictably “do well” afterwards, that doesn’t mean they haven’t been significantly harmed. If their influence is a following of 10M people, and the cancellation attempt reduces their influence by 40%, then it is simultaneously true that (a) “They have an audience of 6M people, they’re doing fine”, and (b) “They’ve been significantly harmed, and many people in such a position who anticipated this outcome would have a significant incentive to self-censor”. It remains a bad thing. It’s less bad than doing it to random civilians, sure, but it remains bad.
It may decrease their influence substantially, though. I’ll quote at length from here. It’s not about doxxing per se, but it’s about cancellation attempts (which doxxing a heretic enables), and about arguments similar to the above:
Now, the author has a bestselling book, has been on dozens of podcasts, and now works for an advocacy organization that’s 100% behind her message (she’s not technically listed as a cofounder, but might as well be). She has certainly landed on her feet and has a decent level of reach; yet, clearly, if not for a bunch of incidents like the above—and, as she says, probably a lot more incidents for which she doesn’t have specific evidence—then she would have had much greater reach.
In Scott’s case… if we consider the counterfactual where there wasn’t a NYT article drawing such smears against Scott, then, who knows, maybe today some major news organizations (perhaps the NYT itself!) would have approached him for permission to republish some Slate Star Codex articles on their websites, perhaps specifically some of those on AI during the last ~year when AI became big news. Or offered to interview him for a huge audience on important topics, or something.
So be careful not to underestimate the extent of unseen censorship and cancellation, and therefore the damage done by “naming and shaming” tactics.
+1, I agree with all of this, and generally consider the SSC/NYT incident to be an example of the rationalist community being highly tribalist.
(more on this in a twitter thread, which I’ve copied over to LW here)
What do you mean by example, here? That this is demonstrating a broader property, or that in this situation, there was a tribal dynamic?
There were two issues: what is the cost of doxxing, and what is the benefit of doxxing. I think an equally important crux of disagreement is the latter, not the former. IMO the benefit was zero: it’s not newsworthy, it brings no relevant insight, publishing it does not advance the public interest, it’s totally irrelevant to the story. Here CM doesn’t directly argue that there was any benefit to doxxing; instead he kinda conveys a vibe / ideology that if something is true then it is self-evidently intrinsically good to publish it (but of course that self-evident intrinsic goodness can be outweighed by sufficiently large costs). Anyway, if the true benefit is zero (as I believe), then we don’t have to quibble over whether the cost was big or small.
Trouble is, the rationalist community tends to get involved with taboo topics and regularly defends itself by saying that it’s because it’s self-evidently good for the truth to be known. Thus there is (at least apparently) an inconsistency.
There’s a fact of the matter about whether the sidewalk on my street has an odd vs even number of pebbles on it, but I think everyone including rationalists will agree that there’s no benefit of sharing that information. It’s not relevant for anything else.
By contrast, taboo topics generally become taboo because they have important consequences for decisions and policy and life.
This is the “rationalists’ sexist and racist beliefs are linked to support for sexist and racist policies” argument, which is something that some of the people who promote taboo beliefs try to avoid. For example, Scott Alexander argues that it can be understood simply as having a curious itch to understand “the most irrelevant orthodoxy you can think of”, which sure sounds different from “because they have important consequences for decisions and policy and life”.
I don’t think I was making that argument.
If lots of people have a false belief X, that’s prima facie evidence that “X is false” is newsworthy. There’s probably some reason that X rose to attention in the first place; and if nothing else, “X is false” at the very least should update our priors about what fraction of popular beliefs are true vs false.
Once we’ve established that “X is false” is newsworthy at all, we still need to weigh the cost vs benefits of disseminating that information.
I hope that everyone, including rationalists, is in agreement about all this. For example, prominent rationalists are familiar with the idea of infohazards, reputational risks, picking your battles, simulacra 2, and so on. I’ve seen a lot of strong disagreement on this forum about what newsworthy information should and shouldn’t be disseminated, and in what formats and contexts. I sure have my own opinions!
…But all that is irrelevant to this discussion here. I was talking about whether Scott’s last name is newsworthy in the first place. For example, it’s not the case that lots of people around the world were under the false impression that Scott’s true last name was McSquiggles, and now NYT is going to correct the record. (It’s possible that lots of people around the world were under the false impression that Scott’s true last name is Alexander, but that misconception can be easily corrected by merely saying it’s a pseudonym.) If Scott’s true last name revealed that he was secretly British royalty, or secretly Albert Einstein’s grandson, etc., that would also at least potentially be newsworthy.
Not everything is newsworthy. The pebbles-on-the-sidewalk example I mentioned above is not newsworthy. I think Scott’s name is not newsworthy either. Incidentally, I also think there should be a higher bar for what counts as newsworthy in NYT, compared to what counts as newsworthy when I’m chatting with my spouse about what happened today, because of the higher opportunity cost.
I agree, I’m just trying to say that the common rationalist theories on this topic often disagree with your take.
I think this argument would be more transparent with examples. Whenever I think of examples of popular beliefs that it would be reasonable to change one’s support of in the light of this, they end up involving highly politicized taboos.
There are different distinctions when it comes to infohazards. One is non-Bayesian infohazards, where certain kinds of information are thought to break people’s rationality; that seems obscure and not so relevant here. Another is recipes for destruction, where you give a small hostile faction the ability to unilaterally cause harm. This could theoretically be applicable if we were talking about publishing Scott Alexander’s personal address and his habits for when and where he goes, as that makes it more practical for terrorists to attack him. But that seems less relevant for his real name, when it is readily available and he ends up facing tons of attention regardless.
Reputational risks can at times be acknowledged, but at the same time reputational risks are one of the main justifications for the taboos. Stereotyping is basically reputational risk on a group level; if rationalists dismiss the danger of stereotyping with “well, I just have a curious itch”, that sure seems like a strong presumption of truthtelling over reputational risk.
Picking your battles seems mostly justified on pragmatics, so it seems to me that the NYT can just go “this is a battle that we can afford to pick”.
Rationalists seem to usually consider simulacrum level 2 to be pathological, on the basis of presumption of the desirability of truth.
I think this is a perfectly valid argument for why NYT shouldn’t publish it; it just doesn’t seem very strong or robust, and doesn’t square well with the general pro-truth ideology.
Like, if the NYT did go out and count the number of pebbles on your road, then yes there’s an opportunity cost to this etc., which makes it a pretty unnecessary thing to do, but it’s not like you’d have any good reason to whip out a big protest or anything. This is the sort of thing where at best the boss should go “was that really necessary?”, and both “no, it was an accident” or “yes, because of <obscure policy reason>” are fine responses.
If one grants a presumption of the value of truth, and grants that it is permissible, admirable even, to follow the itch to uncover things that people would really rather downplay, then it seems really hard to say that Cade Metz did anything wrong.
By coincidence, Scott has written about this subject.
Not being completely hidden isn’t “readily available”. If finding his name is even a trivial inconvenience, it doesn’t cause the damage caused by plastering his name in the Times.
The context from above is that we’re weighing costs vs benefits of publishing the name, and I was pulling out the sub-debate over what the benefits are (setting aside the disagreement about how large the costs are).
I agree that “the benefits are ≈0” is not a strong argument that the costs outweigh the benefits in and of itself, because maybe the costs are ≈0 as well. If a journalist wants to report the thickness of Scott Alexander’s shoelaces, maybe the editor will say it’s a waste of limited wordcount, but the journalist could say “hey it’s just a few words, and y’know, it adds a bit of color to the story”, and that’s a reasonable argument: the cost and benefit are each infinitesimal, and reasonable people can disagree about which one slightly outweighs the other.
But “the benefits are ≈0” is a deciding factor in a context where the costs are not infinitesimal. Like if Scott asserts that a local gang will beat him senseless if the journalist reports the thickness of his shoelaces, it’s no longer infinitesimal costs versus infinitesimal benefits, but rather real costs vs infinitesimal benefits.
If the objection is “maybe the shoelace thickness is actually Scott’s dark embarrassing secret that the public has an important interest in knowing”, then yeah that’s possible and the journalist should certainly look into that possibility. (In the case at hand, if Scott were secretly SBF’s brother, then everyone agrees that his last name would be newsworthy.) But if the objection is just “Scott might be exaggerating, maybe the gang won’t actually beat him up too badly if the shoelace thing is published”, then I think a reasonable ethical journalist would just leave out the tidbit about the shoelaces, as a courtesy, given that there was never any reason to put it in in the first place.
I get that this is an argument one could make. But the reason I started this tangent was because you said:
That is, my original argument was not in response to the “Anyway, if the true benefit is zero (as I believe), then we don’t have to quibble over whether the cost was big or small” part of your post, it was to the vibe/ideology part.
Where I was trying to say, it doesn’t seem to me that Cade Metz was the one who introduced this vibe/ideology, rather it seems to have been introduced by rationalists prior to this, specifically to defend tinkering with taboo topics.
Like, you mention that Cade Metz conveys this vibe/ideology that you disagree with, and you didn’t try to rebut it directly; I assumed that was because Cade Metz didn’t defend it but just treated it as obvious.
And that’s where I’m saying, since many rationalists including Scott Alexander have endorsed this ideology, there’s a sense in which it seems wrong, almost rude, to not address it directly. Like a sort of Motte-Bailey tactic.
It is not surprising when a lot of people having a false belief is caused by the existence of a taboo. Otherwise the belief would probably already have been corrected or wouldn’t have gained popularity in the first place. And giving examples for such beliefs of course is not really possible, precisely because it is taboo to argue that they are false.
It’s totally possible to say taboo things, I do it quite often.
But my point is more, this doesn’t seem to disprove the existence of the tension/Motte-Bailey/whatever dynamic that I’m pointing at.
Metz/NYT disagree. He doesn’t completely spell out why (it’s not his style), but, luckily, Scott himself did:
Metz/NYT considered Scott to be bad enough to deserve whatever inconveniences/punishments would come to him as a result of tying his alleged wrongthink to his real name, is the long and short of it.
Which racist and sexist policies?
None, if you buy the “we just have a curious itch to understand the most irrelevant orthodoxy you can think of” explanation. But if that’s a valid reason for rationalists to dig into things that are taboo because of their harmful consequences, is it not then also valid for Cade Metz to follow a curious itch to dig into rationalists’ private information?
Well, I don’t understand what that position has to do with doxxing someone. What does obsessively pointing out how a reigning orthodoxy is incorrect have to do with revealing someone’s private info and making it hard for them to do their jobs? The former is socially useful because a lot of orthodoxies result in bad policies or cause people to err in their private lives or whatever. The latter mostly isn’t.
Yes, sometimes the two coincide, e.g. revealing that the church uses heliocentric models to calculate celestial movements, or Watergate, or whatever. But that’s quite rare, and I note Metz didn’t provide any argument that doxxing Scott is like one of those cases.
Consider a counterfactual where Scott was, in his private life, crusading against DEI policies in a visible way. Then people benefiting from those policies may want to know that “there’s this political activist who’s advocating for policies that harm you, and the scope of his influence is way bigger than you thought”. Which would clearly be useful info for a decent chunk of readers. Knowing his name would be useful!
Instead, it’s just “we gotta say his name. It’s so obvious, you know?” OK. So what? Who does that help? Why’s the knowledge valuable? I have not seen a good answer to those questions. Or consider: if Metz for some bizarre reason decided to figure out who “Algon” on LW is and wrote an article revealing that I’m X because “it’s true”, I’d say that’s a waste of people’s time and a bit of a dick move.
Yes, he should still be allowed to do so, because regulating free speech well is hard and I’d rather eat the costs than deal with poor regulations. Doesn’t change the dickishness of it.
I think once you get concrete about it in the discourse, this basically translates to “supports racist and sexist policies”, albeit from the perspective of those who are pro these policies.
Let’s take autogynephilia theory as an example of a taboo belief, both because it’s something I’m familiar with and because it’s something Scott Alexander has spoken out against, so we’re not putting Scott Alexander in any uncomfortable position about it.
Autogynephilia theory has become taboo for various reasons. Some people argue that they should still disseminate it because it’s true, even if it doesn’t have any particular policy implications, but of course that leads to paradoxes where those people themselves tend to have privacy and reputation concerns where they’re not happy about having true things about themselves shared publicly.
The alternate argument is on the basis of “a lot of orthodoxies result in bad policies or cause people to err in their private lives or whatever”, but when you get specific about what is meant rather than dismissing it as “or whatever”, it’s something like “autogynephilia theory is important to know so autogynephiles don’t end up thinking they’re trans and transitioning”, which in other language would mean something like “most trans women shouldn’t have transitioned, and we need policies to ensure that fewer transition in the future”. Which is generally considered an anti-trans position!
Now you might say, well, that position is a good position. But that’s a spicy argument to make, so a lot of the time people fall back on “autogynephilia theory is true and we should have a strong presumption in favor of saying true things”.
Now, there’s also the whole “the discourse is not real life” issue, where the people who advocate for some belief might not be representative of the applications of that belief.
That seems basically correct? And also fine. If you think lots of people are making mistakes that will hurt themselves/others/you and you can convince people about this by sharing info, that’s basically fine to me.
I still don’t understand what this has to do with doxxing someone. I suspect we’re talking past each other right now.
What paradoxes, which people, which things? This isn’t a gotcha: I’m just struggling to parse this sentence right now. I can’t think of any concrete examples that fit. Maybe something like “there are autogynephiles who claim to be trans but aren’t really, and they’d be unhappy if this fact were shared because it would harm their reputation”? If that were true, and someone discovered a specific autogynephile who thinks they’re not really trans but presents as such, and someone outed them, I would call that a dick move.
So I’m not sure what the paradox is. One stab at a potential paradox: if you spread the hypothetically true info that 99.99% of trans women are autogynephiles, then a rational agent would conclude that any particular trans woman is really a cis autogynephile. Which means you’re basically doxxing them by providing info that would, in this world, be relevant to societies making decisions about stuff like who’s allowed to compete in women’s sports.
I guess this is true, but it also seems like an extreme case to me. Most people aren’t that rational, and depending on the society, are willing to believe others about kinda-unlikely things about themselves. So in a less extreme hypothetical, say 90% instead of 99.99%, I can see people believing that most supposedly trans women aren’t trans, while still believing any specific person who claims they’re a trans woman.
EDIT: I believe that a significant fraction of conflicts aren’t mostly mistakes. But even there, the costs of attempts to restrict speech are quite high.
I mean insofar as people insist they’re interested in it for political reasons, it makes sense to distinguish this from the doxxing and say that there’s no legitimate political use for Scott Alexander’s name.
The trouble is that often people de-emphasize their political motivations, as Scott Alexander did when he framed it around the most irrelevant orthodoxy you can think of, that one is simply interested in out of a curious itch. The most plausible motivation I can think of for making this frame is to avoid being associated with the political motivation.
But regardless of whether that explanation is true, if one says that there’s a strong presumption in favor of sharing truth, to the point where it’s admirable for people to dig into inconsequential dogmas that are taboo to question because they cover up moral contradictions that people are afraid will cause genocidal harm if unleashed, then it sure seems like this strong presumption in favor of truth also legitimizes mild cases of doxxing.
Michael Bailey tends to insist that it’s bad to speculate about hidden motives that scientists like him might have for his research, yet when he explains his own research, he insists that he should study people’s hidden motivation using only the justification of truth and curiosity.
OK, now I understand the connection to doxxing much more clearly. Thank you. To be clear, I do not endorse a legally enforced no-doxxing rule.
I still disagree, because it didn’t look like Metz had any reason to doxx Scott beyond “just because”. There were no big benefits to readers, nor any story about why there was no harm done to Scott in spite of his protests.
Whereas if I’m a journalist and encounter someone who says “if you release information about genetic differences in intelligence, that will cause a genocide”, I can give reasons for why that is unlikely. And I can give reasons for why the associated common-bundle-of-beliefs-and-values, i.e. orthodoxy, is not inconsequential: that there are likely, large (albeit not genocide-large) harms that this orthodoxy is causing.
I mean I’m not arguing Cade Metz should have doxxed Scott Alexander, I’m just arguing that there is a tension between common rationalist ideology that one should have a strong presumption in favor of telling the truth, and that Cade Metz shouldn’t have doxxed Scott Alexander. As far as I can tell, this common rationalist ideology was a cover for spicier views that you have no issue admitting to, so I’m not exactly saying that there’s any contradiction in your vibe. More that there’s a contradiction in Scott Alexander’s (at least at the time of writing Kolmogorov Complicity).
I’m not sure what my own resolution to the paradox/contradiction is. Maybe that the root problem seems to be that people create information to bolster their side in political discourse, rather than to inform their ideology about how to address problems that they care about. In the latter case, creating information does real productive work, but in the former case, information mostly turns into a weapon, which incentivizes creating some of the most cursed pieces of information known to the world.
His doxing Scott was in an article that also contained lies, lies which made the doxing more harmful. He wouldn’t have just posted Scott’s real name in a context where no lies were involved.
Your argument rests on a false dichotomy. There are definitely other options than ‘wanting to know truth for no reason at all’ and ‘wanting to know truth to support racist policies’. It is at least plausibly the case that beneficial, non-discriminatory policies could result from knowledge currently considered taboo. It could at least be relevant to other things and therefore useful to know!
What plausible benefit is there to knowing Scott’s real name? What could it be relevant to?
People do sometimes make the case that knowing more information about sex and race differences can be helpful for women and black people. It’s a fine case to make, if one can actually make it work out in practice. My point is just that the other two approaches also exist.
You’re conflating “have important consequences” with “can be used as weapons in discourse”.
I think this is clearly true, but the application is a bit dubious. There’s a difference between “we have to talk about the bell curve here even though the object-level benefit is very dubious because of the principle that we oppose censorship” and “let’s doxx someone”. I don’t think it’s inconsistent to be on board with the first (which I think a lot of rationalists have proven to be, and which is an example of what you claimed exists) but not the second (which is the application here).
Scott tried hard to avoid getting into the race/IQ controversy. Like, in the private email LGS shared, Scott states “I will appreciate if you NEVER TELL ANYONE I SAID THIS”. Isn’t this the opposite of “it’s self-evidently good for the truth to be known”? And yes there’s a SSC/ACX community too (not “rationalist” necessarily), but Metz wasn’t talking about the community there.
My opinion as a rationalist is that I’d like the whole race/IQ issue to f**k off so we don’t have to talk or think about it, but certain people like to misrepresent Scott and make unreasonable claims, which ticks me off, so I counterargue, just as I pushed a video by Shaun once when I thought somebody on ACX sounded a bit racist to me on the race/IQ topic.
Scott and myself are consequentialists. As such, it’s not self-evidently good for the truth to be known. I think some taboos should be broached, but not “self-evidently” and often not by us. But if people start making BS arguments against people I like? I will call BS on that, even if doing so involves some discussion of the taboo topic. But I didn’t wake up this morning having any interest in doing that.
I agree that Scott Alexander’s position is that it’s not self-evidently good for the truth about his own views to be known. I’m just saying there’s a bunch of times he’s alluded to or outright endorsed it being self-evidently good for the truth to be known in general, in order to defend himself when criticized for being interested in the truth about taboo topics.
I for one am definitely worse off.
I now have to read Scott on Substack instead of SSC.
Scott doesn’t write sweet things that could attract nasty flies anymore.