The risk from recursive self-improvement is either dramatic enough to outweigh the low probability of the event, or likely enough to outweigh the probability of other existential risks. This is the idea everything in this community revolves around (it’s not obvious, but I believe so).
Umm, this is not the SIAI blog. It is “Less Wrong: a community blog devoted to refining the art of human rationality”.
The idea everything revolves around in this community is what comes after the ‘:’ in the preceding sentence.
Google site:lesswrong.com “artificial intelligence” yields 4,860 results.
Google site:lesswrong.com rationality yields 4,180 results.
Besides its history and the logo in the top right corner linking to the SIAI, I believe you underestimate the importance of artificial intelligence and its associated risks within this community. As I said, it is not obvious, but when Yudkowsky came up with LessWrong.com it was against the background of the SIAI.
Google site:lesswrong.com “me” yields 5,360 results.
Google site:lesswrong.com “I” yields 7,520 results.
Google site:lesswrong.com “it” yields 7,640 results.
Google site:lesswrong.com “a” yields 7,710 results.
Perhaps you overestimate the extent to which Google search results for a term reflect the importance of the concept to which the word refers.
I note that:
The best posts on ‘rationality’ are among those that do not use the word ‘rationality’*.
Similar to ‘Omega’ and ‘Clippy’, AI is a useful agent to include when discussing questions of instrumental rationality. It allows us to consider highly rational agents in the abstract without all the bullshit and normative dead weight that gets thrown into conversations whenever the agents in question are humans.
Eliezer explicitly forbade discussion of FAI/Singularity topics on lesswrong.com for the first few months because he didn’t want discussion of such topics to be the primary focus of the community.
Again, “refining the art of human rationality” is the central idea that everything here revolves around. That doesn’t mean that FAI and related topics aren’t important, but lesswrong.com would continue to thrive (albeit less so) if all discussion of the Singularity ceased.