Well, there’s a whole range of crackpots, ranging from the flat-earthers who are obviously not using good reasoning to anyone who reads a few paragraphs, to groups who sound logical and erudite as long as you don’t have expertise in the subject they’re talking about. Insofar as LW is confused with (or actually is) some kind of crackpots, it’s crackpots more towards the latter end of the scale.
Sure. And insofar as it’s easy for us, we should do our best to avoid being classified as crackpots of the first type :)
Avoiding classification as crackpots of the second type seems harder. The main thing seems to be having lots of high-status, respectable people agree with the things you say. Nick Bostrom (Oxford professor) and Elon Musk (billionaire tech entrepreneur) seem to have done more for the credibility of AI risk than any object-level argument could, for instance.