“The Errors, Insights and Lessons of Famous AI Predictions – and What They Mean for the Future.”
Cited by 3 (of which 1 is a self-cite)
“Why We Need Friendly AI.”
Cited by 5
“Exploratory Engineering in AI.”
Cited by 1 (a self-cite)
“Embryo Selection for Cognitive Enhancement: Curiosity or Game-Changer?”
Cited by 4 (of which 1 is a self-cite)
“Safety Engineering for Artificial General Intelligence.”
Cited by 11 (of which 6 are self-cites)
“How Hard Is Artificial Intelligence? Evolutionary Arguments and Selection Effects.”
Cited by 7 (of which 1 is a self-cite)
“Advantages of Artificial Intelligences, Uploads, and Digital Minds.”
Cited by 8 (of which 2 are self-cites)
“Coalescing Minds: Brain Uploading-Related Group Mind Scenarios.”
Cited by 7 (of which 2 are self-cites)
MIRI seems to have done OK with the general public, and in general more people seem willing to voice concern over AI-related x-risk, but almost nobody seems willing to associate that concern with MIRI or LessWrong, which is a bit sad.
Has MIRI engaged any kind of PR firm to deal with this, either one with an academic focus or one with a public focus?
Has MIRI pulled back from trying to get academic publications? I noticed there are no new journal articles for 2015.
Is this due to low impact factor?
https://intelligence.org/all-publications/