Have you prevented any existential risks? Has anyone here? Has the site?
Is that something users actually interact with here?
In what way is preventing existential risk relevant to what a user coming here is looking at, seeing, interacting with, or doing?
And what introduction would be necessary to explain why Less Wrong, as a community, is congratulating itself for something it hasn’t done yet, and which no user will interact with or contribute to in a meaningful way as a part of interacting with Less Wrong?
There are people here who are working on preventing UAI—I’m not sure they’re right (I have my doubts about provable Friendliness), but it’s definitely part of the history of and purpose for the site.
While Yudkowsky is hardly the only person to work on practical self-improvement, it amazes me that it took a long-range threat to get people to work seriously on the sunk-cost fallacy and such—and to work seriously on teaching how to notice biases and give them up.
Most people aren’t interested in existential risk, but some of the people who are interested in the site obviously are.
Granted, but is it a core aspect of the site? Is it something your users need to know, to know what Less Wrong is about?
Beyond that, does it signal the right things about Less Wrong? (What kinds of groups are worried about existential threats? Would you consider worrying about existential threats, in the general case rather than this specific case, to be a sign of a healthy or unhealthy community?)
When it comes to existential threats, humanity has already cried wolf too many times.
Most of the time the explanation of the threat was completely stupid, but nonetheless, most people are already inoculated against this kind of message in general.