Google site:lesswrong.com “artificial intelligence” 4,860 results
Google site:lesswrong.com rationality 4,180 results
Besides its history and the logo in the top right corner that links to the SIAI, I believe that you underestimate the importance of artificial intelligence and associated risks within this community. As I said, it is not obvious, but when Yudkowsky created LessWrong.com, it was against the background of the SIAI.
Google site:lesswrong.com “me” 5,360 results
Google site:lesswrong.com “I” 7,520 results
Google site:lesswrong.com “it” 7,640 results
Google site:lesswrong.com “a” 7,710 results
Perhaps you overestimate the extent to which Google search results for a term reflect the importance of the concept to which the word refers.
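A back-of-the-envelope way to see the confound (a minimal Python sketch, not from the thread itself): if we assume the hit count for "a" approximates the total number of pages Google has indexed for the site, since "a" appears on virtually every English-language page, then raw counts for any common word saturate toward that page total. They measure page coverage, not topical centrality.

```python
# Hit counts quoted in this thread for site:lesswrong.com searches.
hits = {
    "artificial intelligence": 4860,
    "rationality": 4180,
    "me": 5360,
    "I": 7520,
    "it": 7640,
    "a": 7710,
}

# Assumption (not from the thread): "a" occurs on essentially every
# English-language page, so its hit count roughly approximates the
# total number of pages Google has indexed for the site.
total_pages = hits["a"]

for term, count in sorted(hits.items(), key=lambda kv: -kv[1]):
    coverage = count / total_pages
    # Counts for common words saturate toward the page total, so a raw
    # hit count mostly measures how ordinary a word is, not how central
    # the concept is to the site.
    print(f"{term:>25}: {count:>5} hits = {coverage:6.1%} of indexed pages")
```

Under that (rough) assumption, "it" and "I" appear on nearly every indexed page, while "rationality" and "artificial intelligence" each appear on roughly half to two-thirds of them; the raw counts tell you almost nothing about which concept the community cares about more.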
I note that:
The best posts on ‘rationality’ are among those that do not use the word ‘rationality’.
Like ‘Omega’ and ‘Clippy’, an AI is a useful agent to include when discussing questions of instrumental rationality: it lets us consider highly rational agents in the abstract, without all the bullshit and normative dead weight that gets thrown into conversations whenever the agents in question are humans.
Eliezer explicitly forbade discussion of FAI/Singularity topics on lesswrong.com for the first few months because he didn’t want such topics to become the community’s primary focus.
Again, “refining the art of human rationality” is the central idea that everything here revolves around. That doesn’t mean FAI and related topics aren’t important, but lesswrong.com would continue to thrive (albeit less so) if all discussion of the Singularity ceased.