Why don’t people (outside small groups like LW) advocate the creation of superintelligence much? If it is Friendly, it would have tremendous benefits. If superintelligence’s creation isn’t being advocated out of fears of it being unFriendly, then why don’t more people advocate FAI research? Is it just too long-term for people to really care about? Do people not think managing the risks is tractable?
One answer could be that people don’t really think that a superintelligence is possible. It doesn’t even enter in their model of the world.
Like this? https://youtube.com/watch?v=xKk4Cq56d1Y
I think something else is going on. The responses to this question about the feasibility of strong AI mostly stated that it was possible, though selection bias is probably largely at play, as knowledgeable people would be more likely to answer than the ignorant would be.
Certainly AI is an increasingly present concept in Western culture, but only as fiction, as far as I can tell.
No one on the street takes it seriously, as in "it's really starting to happen." The media may be paving the way for a change in that, as the surge of AI-related movies suggests, but I would bet it's still an idea very far from most people's realm of possibility. Also, even once the reality of an AI were established, it would still be a jump to believe in the possibility of an intelligence superior to humans', a leap that for me is tiny but that I suspect would not be so small for many (self-importance and all that).
But other than self-importance, why don’t people take it seriously? Is it otherwise just due to the absurdity and availability heuristics?
FWIW, I have been a long-time reader of SF, have long been a believer in strong AI, and am familiar with friendly and unfriendly AIs and the idea of the singularity, but I hadn't heard much serious discussion of the development of superintelligence. My experience and beliefs are probably not entirely normal, but they arose from a context close to normal.
My thought process until I started reading LessWrong and related sites was basically split between "scientists are developing bigger and bigger supercomputers, but they are all assigned to narrow tasks: playing chess, obscure math problems, managing complicated data traffic" and "intelligence is a difficult task akin to teaching a computer to walk bipedally or recognize complex visual images, which will take forever with lots of dead ends." Most of what I had read about spontaneous AI was either fairly silly SF premises (lost packets on the internet become sentient!) or set in the far future, after many decades of work on AI finally resulting in a super-AI.
I also believe that science reporting downplays the AI aspects of computing advances. Siri, self-driving cars, etc. are no longer referred to as AI the way they would have been when I was growing up; AI is by definition something from science fiction or well off in the future. Anything we have now is framed as just an interesting program, not an 'intelligence' of any sort.
If you're not reading about futurism, it's unlikely to come up. There aren't any former presidential candidates giving lectures about it, so most people have never heard of it. Politics isn't about policy, as Robin Hanson likes to say.