I think mentioning it early on sends a bad signal. Most groups that talk about the end of the world don’t have very much credibility among skeptical people, so if telling people we talk about the end of the world is one of the first pieces of evidence we give them about us, they’re liable to update on that evidence and decide their time is better spent reading other sites. I’d be OK with an offhand link to “existential risks” somewhere in the second half of the homepage text, but putting it in the second sentence is a mistake, in my view.
“What they emphasize about themselves doesn’t match their priorities” also sends a bad signal leading to loss of credibility.
This may fall into the “you can’t polish a turd” category. Talking about the end of the world is inherently Bayesian evidence of crackpottery. Framing the problem as “we need to change how we present talking about the end of the world” can’t help: assuming you present it at all, anything you can do to change the presentation can also be done by genuine crackpots, so changing how you present it should not affect what a rational listener concludes at all.
Disagree. If we give lots of evidence of non-crackpottishness before discussing the end of the world (lots of intelligent discussion of biases, etc.), then by the time someone sees the end-of-the-world discussion, their prior on LW being an intelligent community may be strong enough that they’re not driven away.
Well, there’s a whole range of crackpots, ranging from the flat-earthers who are obviously not using good reasoning to anyone who reads a few paragraphs, to groups who sound logical and erudite as long as you don’t have expertise in the subject they’re talking about. Insofar as LW is confused with (or actually is) some kind of crackpots, it’s crackpots more towards the latter end of the scale.
Sure. And insofar as it’s easy for us, we should do our best to avoid being classified as crackpots of the first type :)
Avoiding classification as crackpots of the second type seems harder. The main thing seems to be having lots of high-status, respectable people agree with the things you say. Nick Bostrom (Oxford professor) and Elon Musk (billionaire tech entrepreneur) seem to have done more for the credibility of AI risk than any object-level argument could, for instance.