Will vague “AI sentience” concerns do more for AI safety than anything else we might do?
[Cross-posting from something I posted on Facebook.]
Prediction 1 (high confidence): In 2022, many more people will freak out over vague concerns about AI “becoming sentient” than will freak out over the types of issues that AI safety researchers are actually concerned about.
Prediction 2 (medium-low confidence): In the longer run, people freaking out over vague concerns about AI becoming sentient will do more to mitigate serious AI safety concerns than anything that AI safety researchers might have done on their own. This is because mass freak-outs seem more likely to change the interests and values of government, industry, and even academia than anything a bunch of specialist researchers might say or do. And any such shift in overall government / industry / academic interests and values seems likely to produce more substantial downstream change than anything the specialist researchers could have achieved by themselves.
[Sorry, I don’t currently have any particularly good way of operationalizing those predictions or verifying them, so I’m not going to put them up on Metaculus or anything like that. If anybody else has a good way of operationalizing or verifying them, I’m open to suggestions.]
Weak evidence: In the game I’m currently playing, Mass Effect, the galactic government has placed a moratorium on companies doing research into “real artificial intelligence,” out of concerns about AI becoming conscious. Clearly the game’s designers, at least, thought this was a plausible concern and a plausible reason to ban AI development. I’m pretty sure I’ve seen other comparable examples in novels and other media. My model of the world says that most thought leaders in government, industry, and even academia are likely to be no more sophisticated or educated on these matters than the relevant game designers / authors.
Never mind that in this game, as in almost all media I’ve seen, the distinction between “extremely capable robots” (of the type that AI safety researchers might freak out about) and “real AI” (that other people freak out about) is fuzzy at best. In fact, I suspect that to many people the only distinction is that the latter are “sentient” in some fuzzy sense whereas the former are not.
Furthermore, in my experience at least, there seems to be a strong association in the media between “real” / “sentient” / “conscious” AI on the one hand, and AI developing its own goals and desires and maybe turning on humanity on the other.
(To give another example—I mean, seriously, Ultron is supposed to be “artificial intelligence” but Jarvis is not?!)