Preventing existential risk is part of what this site is about. Do you think it shouldn’t be mentioned at all, or do you think it should be described some other way?
Have you prevented any existential risks? Has anyone here? Has the site?
Is that something users actually interact with here?
In what way is preventing existential risk relevant to what a user coming here is looking at, seeing, interacting with, or doing?
And what introduction would be necessary to explain why Less Wrong, as a community, is congratulating itself for something it hasn’t done yet, and which no user will interact with or contribute to in a meaningful way as a part of interacting with Less Wrong?
There are people here who are working on preventing UAI—I’m not sure they’re right (I have my doubts about provable Friendliness), but it’s definitely part of the history of and purpose for the site.
While Yudkowsky is hardly the only person to work on practical self-improvement, it amazes me that it took a long-range threat to get people to work seriously on the sunk-cost fallacy and such—and to work seriously on teaching how to notice biases and give them up.
Most people aren’t interested in existential risk, but some of the people who are interested in the site obviously are.
Granted, but is it a core aspect of the site? Is it something your users need to know, to know what Less Wrong is about?
Beyond that, does it signal the right things about Less Wrong? (What kinds of groups are worried about existential threats? Would you consider worrying about existential threats, in the general case rather than this specific case, to be a sign of a healthy or unhealthy community?)
When it comes to existential threats, humanity has already cried wolf too many times.
Most of the time the explanation of the threat was completely stupid, but nonetheless, most people are already inoculated against this kind of message in general.
I think mentioning it early on sends a bad signal. Most groups that talk about the end of the world don’t have very much credibility among skeptical people, so if telling people we talk about the end of the world is one of the first pieces of evidence we give them about us, they’re liable to update on that evidence and decide their time is better spent reading other sites. I’d be OK with an offhand link to “existential risks” somewhere in the second half of the homepage text, but putting it in the second sentence is a mistake, in my view.
“What they emphasize about themselves doesn’t match their priorities” also sends a bad signal leading to loss of credibility.
This may fall in the “you can’t polish a turd” category. Talking about the end of the world is inherently Bayesian evidence of crackpottishness. Thinking of the problem as “we need to change how we present talking about the end of the world” can’t help. Assuming you present it at all, anything you can do to change how you present it can also be done by genuine crackpots, so changing how you present it should not affect what a rational listener thinks at all.
Disagree. By giving lots of evidence of non-crackpottishness before discussing the end of the world (having lots of intelligent discussion of biases etc.), then by the time someone sees discussion of the end of the world, their prior on LW being an intelligent community may be strong enough that they’re not driven away.
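A toy Bayes calculation makes the shape of that argument concrete (the numbers below are invented purely for illustration): the likelihood ratio for “talks about the end of the world” is the same for every reader, but the posterior depends heavily on the prior they walk in with.

    # Toy illustration of the argument above; all numbers are made up.
    # The likelihood ratio for "talks about doom" is held fixed; only the
    # reader's prior P(crackpot) differs.

    def posterior_crackpot(prior_crackpot,
                           p_doom_talk_given_crackpot=0.9,
                           p_doom_talk_given_sane=0.1):
        """Posterior P(crackpot | talks about doom) via Bayes' rule."""
        p_sane = 1 - prior_crackpot
        numerator = p_doom_talk_given_crackpot * prior_crackpot
        denominator = numerator + p_doom_talk_given_sane * p_sane
        return numerator / denominator

    # Reader who lands straight on the doom-talk, prior P(crackpot) = 0.5:
    print(posterior_crackpot(0.5))   # ~0.90 -- likely drives them away

    # Reader who first saw lots of intelligent discussion of biases, prior = 0.05:
    print(posterior_crackpot(0.05))  # ~0.32 -- still an update, but they may stay

Same evidence, same update rule, very different outcome—which is exactly why the ordering of the introduction matters.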
Well, there’s a whole range of crackpots, ranging from the flat-earthers who are obviously not using good reasoning to anyone who reads a few paragraphs, to groups who sound logical and erudite as long as you don’t have expertise in the subject they’re talking about. Insofar as LW is confused with (or actually is) some kind of crackpots, it’s crackpots more towards the latter end of the scale.
Sure. And insofar as it’s easy for us, we should do our best to avoid being classified as crackpots of the first type :)
Avoiding classification as crackpots of the second type seems harder. The main thing seems to be having lots of high status, respectable people agree with the things you say. Nick Bostrom (Oxford professor) and Elon Musk (billionaire tech entrepreneur) seem to have done more for the credibility of AI risk than any object-level argument could, for instance.
Others have to decide on branding and identity, but I consider MIRI and the Future of Humanity Institute to have very different core missions than LessWrong, so adding those missions to a description of LW muddies the presentation, particularly for outreach material.
To the point, I think “we’re saving the world from Unfriendly AI” is not effective general outreach for LessWrong’s core mission, and beating around the bush with “existential threats” would elicit a “Huh? What?” in most readers. And is there really much going on about other existential threats on LW? Nuclear proliferation? Biological warfare? Asteroid collisions?
Preventing existential risk is part of what this site is about.
I don’t think that existential risk is really a part of the LessWrong Blog/Forum/Wiki site’s mission; it’s just one of the particular areas of interest of many here, like effective altruism or Reactionary politics.
CFAR makes a good institutional match for the Blog/Forum/Wiki of LessWrong, with the mission of developing, delivering, and (I believe) testing training aimed at becoming less wrong, focusing on the same subject areas and influences as LessWrong itself.