I think a largish fraction of the population has worries about human extinction / the end of the world. Very few associate this with the phrase “existential risk”—I for one had never heard the term until after I had started reading about the technological singularity and related ideas. Perhaps rebranding of a sort would help you further the cause. Ditto for FAI—I think ‘Ethical Artificial Intelligence’ would get the idea across well enough and might sound less flaky to certain audiences.
“Ethical Artificial Intelligence” sounds great and makes sense without requiring the background on the technological singularity that “Friendly Artificial Intelligence” does. Every time I try to mention FAI to someone without any background on the topic, I have to take two steps back in the conversation, and it quickly becomes confusing. I think I could mention Ethical AI and continue with whatever point I was making, without giving any background, and it would still make the right connections.
I also expect it would appeal to a demographic likely to support the concept. People who worry about ethical food, business, healthcare, etc. would be likely to worry about existential risk on many levels.
In fact I think I’ll just go ahead and start using Ethical AI from now on. I’m sure people in the FAI community would understand what I’m talking about.
It may be true that many are worried about ‘the end of the world’; however, consider how many of them think that it was predicted by the Mayan calendar to occur on Dec. 21, 2012, and how many actively want it to happen because they believe it will herald the coming of God’s Kingdom on Earth, Olam Haba, or whatever.
We could rebrand ‘existential risk’ as ‘end time’ and gain vast numbers of followers. But I doubt that would actually be desirable.
I do think that ‘Ethical Artificial Intelligence’ would strike a better chord with most people than ‘Friendly’, though. ‘Friendly’ does sound a bit unserious.