Previous LW discussions on taking over the world (last updated in 2013).
Comments of mine on “utopian hope versus reality” (dating from 2012).
Since that era, a few things have happened.
First change: LW is not quite the point of focus that it was. There was a rationalist diaspora into social media, and “Slate Star Codex” (and its associated subreddits?) became a more prominent locus of rationalist discussion. The most important “LW-ish” forums that I know about now might be those which focus on quasi-technical discussion of AI issues like “alignment”. I call them the most important because of...
Second change: The era of deep learning, and of commercialized AI in the guise of “machine learning”, arrived. The fact that these algorithms are not limited to the resources of a single computer, but can in principle tap the resources of an entire data center or even the entire cloud of a major tech corporation, means that we have also arrived at the final stage of the race towards superintelligence.
In the past, taking over the world meant building or taking over the strongest superpower. Now it simply means being the first to create strongly superhuman intelligence; and saving the world means identifying a value system that will make an autonomous AI “friendly”, and working to ensure that the winner of the mind race is guided by friendly rather than unfriendly values. Every other concern is temporary, and any good work done towards other causes will potentially be undone by unfriendly AI, if unfriendly values win the AI race.
(I do not say with 100% certainty that this is the nature of the world, but this scenario has sufficient internal logic that, if it does not apply to reality, there must be some other factor which somehow overrides it.)
I would certainly appreciate a workable solution brought about by those means, but I would note that this approach may not be the first or only one. Ideally, I would like to assemble a balanced portfolio of conspiracies.
If I take a PIDOOMA number of 90% expectation of failure over any timescale for an individual scheme, and assume strict success or failure, then seven schemes is the minimum required for a better-than-even chance that at least one succeeds, thirteen for roughly a 75% chance, and twenty-nine to be rolling for ones. Even though this probably doesn’t correspond to any real probabilities of success, 7-13-29 is the benchmark I’m going to use going in.
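As a quick sanity check on those figures, here is a minimal sketch (not part of the original argument) that computes the chance of at least one success, under the stated assumption of a 90% failure rate per scheme with independent outcomes:

```python
# Chance that at least one of n independent schemes succeeds,
# assuming each scheme fails with probability 0.9 (the PIDOOMA figure above).
def success_prob(n, p_fail=0.9):
    return 1 - p_fail ** n

for n in (7, 13, 29):
    print(f"{n:2d} schemes -> {success_prob(n):.1%} chance of at least one success")

# Output:
#  7 schemes -> 52.2% (better than even)
# 13 schemes -> 74.6% (roughly the 75% mark)
# 29 schemes -> 95.3% (failure odds under 1-in-20, i.e. "rolling for ones")
```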
Those are important distinctions. Thanks for the clarification!