FWIW, I have been a long-time reader of SF, have long believed in strong AI, and am familiar with friendly and unfriendly AIs and the idea of the singularity, but I hadn’t heard much serious discussion of the development of superintelligence. My experience and beliefs are probably not entirely typical, but they arose from a context close to normal.
My thought process until I started reading LessWrong and related sites was basically split between “scientists are developing bigger and bigger supercomputers, but they are all assigned to narrow tasks—playing chess, obscure math problems, managing complicated data traffic” and “intelligence is a difficult task akin to teaching a computer to walk bipedally or recognize complex visual images, which will take forever with lots of dead ends”. Most of what I had read about spontaneous AI was either fairly silly SF premises (lost packets on the internet become sentient!) or set in the far future, after many decades of work on AI finally resulting in a super-AI.
I also believe that science reporting downplays the AI aspects of computer advances. Siri, self-driving cars, etc. are no longer referred to as AI in the way they would have been when I was growing up; AI is by definition something that is science fiction or well off in the future. Anything that we have now is framed as just an interesting program, not an ‘intelligence’ of any sort.