Two counters to the majoritarian argument:
First, it is being mentioned in the mainstream—there was a New York Times article about it recently.
Secondly, I can think of another monumental, civilisation-filtering event that took a long time to enter mainstream thought: nuclear war. I’ve been reading Bertrand Russell’s autobiography recently, and am up to the point where he begins campaigning against the possibility of nuclear destruction. In 1948 he made a speech to the House of Lords (the UK’s upper chamber), explaining that more and more nations would attempt to acquire nuclear weapons until mutual annihilation seemed certain. His fellow Lords agreed with this, but believed the matter to be a problem for their grandchildren.
Looking back even further, for decades after the concept of a nuclear bomb was first formulated, the possibility of nuclear war was only seriously discussed amongst physicists.
I think your second point is stronger. However, I don’t think a single AI rewiring itself is the only way it can go FOOM. Assume the AI is as intelligent as a human; put it on faster hardware (or let it design its own faster hardware) and you’ve got something that works like a human brain, but faster. Let it replicate itself, and you’ve got the equivalent of a team of humans, but one with the advantages of shared memory and instantaneous communication.
Now, if humans can design an AI, surely a team of 1,000,000 human equivalents running 1000x faster can design an improved AI?
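To put rough numbers on that intuition, here is a minimal back-of-envelope sketch in Python. The copy count and speedup are the figures from the paragraph above; the "researcher-years" framing and the zero-coordination-overhead assumption are my own simplifications, not claims from the original argument:

```python
# Back-of-envelope: subjective research effort available to a collective
# of human-equivalent AIs. The 1,000,000 copies and 1000x speedup come
# from the argument above; multiplying them directly (i.e. assuming no
# coordination overhead) is an optimistic simplification.

copies = 1_000_000   # parallel human-equivalent instances
speedup = 1_000      # each instance runs 1000x faster than a human

subjective_years_per_calendar_year = copies * speedup
print(f"{subjective_years_per_calendar_year:,} researcher-years per calendar year")
# -> 1,000,000,000 researcher-years: a billion subjective years of
# design work every calendar year, before counting any further gains
# from shared memory and instantaneous communication.
```

Even if coordination costs eat a couple of orders of magnitude, the remaining effort still dwarfs the total human research that produced the first AI.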