To be perfectly honest, mentioning Artificial Intelligence at all might be the wrong way to start a discussion about the risks of superintelligence. The vast majority of us (myself included) have only very basic ideas of how even modern computers work, much less the scientific/mathematical background to really engage with AI theory, not to mention that we’re so inundated with inaccurate ideas about AI from fiction that it would probably be easier just to dodge the misconceptions entirely. An additional concern is that “serious people” who might otherwise be capable of understanding the issue won’t want to be associated with a seemingly fantastical and/or nerdy discussion and will tune you out on that basis alone.
Short of just starting with That Alien Message, whether because Mr Yudkowsky’s prose style is a little dense(1) at times or because you want to put your own mark on it, I would suggest something along the same lines but with the AI replaced by a biological superintelligence. Throwing in applause lights, like having the “wise fools” who unleash the smartpocalypse be a big corporation or the AI-equivalent be a GMO, would probably make it more palatable to targeted audiences, but might be too far on the dark side for your tastes.
I’ve actually come up with a cool short story idea based on the concept: genetically modified Humboldt squid with satellite internet connections kill off humanity by being a bit overprotective of depleted fish stocks. I might post it as a reply to this post after some polishing.
(1) Not intended as a criticism at all, but it is certainly above a high school reading level, and unfortunately that is an issue when we’re trying to talk to a wide audience.
To be perfectly honest, mentioning Artificial Intelligence at all might be the wrong way to start a discussion about the risks of superintelligence.
I think that’s giving up too much ground. Talking about AI risk while trying to avoid mentioning anything synthetic or artificial or robotic is like talking about asteroid risk while trying to avoid mentioning outer space.
The vast majority of us (myself included) have only very basic ideas of how even modern computers work, much less the scientific/mathematical background to really engage with AI theory
But is that necessary for understanding and accepting any of the Five Theses?
not to mention that we’re so inundated with inaccurate ideas about AI from fiction that it would probably be easier just to dodge the misconceptions entirely.
Don’t a lot of those misconceptions help us? People are primed to be scared that AI is a problem. We then only have to mold that emotion to be less anthropomorphic and reactionary; we don’t have to invent emotion out of whole cloth.
An additional concern is that “serious people” who might otherwise be capable of understanding the issue won’t want to be associated with a seemingly fantastical and/or nerdy discussion
It’s a deliberate feature of my mix that it’s (I hope) optimized for philosophical, narrative, abstractive thinkers—the sort who usually prefer Eliezer’s fleshy narratives over Luke’s skeletal arguments. Both groups are important, but I prioritized making one for the Eliezer crowd because: (a) I think I have a better grasp on how to appeal to them; (b) they’re the sort of crowd that isn’t always drawn to, or in conversation with, programmer culture; and (c) Luke’s non-academic writings on this topic are already fairly well consolidated and organized. Eliezer’s are all over the place, so starting to gather them here gets returns faster.
Short of just starting with That Alien Message
I think That Alien Message is one of the more background-demanding equationless articles Yudkowsky’s written. It’s directed at combating some very specific and sophisticated mistakes about AI, and taking away the moral requires, I think, enough of a background with the AI project to have some quite specific and complicated (false) expectations already in mind.
I’m not sure even someone who’s read all 20 of the posts I listed would be ready yet for That Alien Message, unless by chance they spontaneously asserted a highly specific relevant doubt (e.g., ‘I don’t think anything could get much smarter than a human, therefore I’m not very concerned about AGI’).
I would suggest something along the same lines of replacing AI with a biological superintelligence.
I think the main problem with this is that we think of biological intelligences as moral patients. We think they have rights, are sentient, etc. That adds a lot more problems and complications than we started with. Also, I’m not sure Friendliness is technologically possible for a biological AGI of the sort we’re likely to make (for the same reason it may be technologically impossible for a whole-brain emulation).
Throwing in applause lights, like having the “wise fools” who unleash the smartpocalypse be a big corporation or the AI-equivalent be a GMO, would probably make it more palatable to targeted audiences, but might be too far on the dark side for your tastes.
I’m less worried about whether it’s dark-side than about whether it’s ineffective. Exploiting ‘evil robot apocalypse’ memes dovetails with the general message we’re trying to convey. Exploiting ‘GMOs and big companies are intrinsically evil’ actively contradicts a lot of the message we want to convey. E.g., it capitalizes on ‘don’t tamper with Nature’. One of the most important take-aways from LessWrong is that if we don’t take control over our own destiny—if we don’t play God, in a safe and responsible way—then the future will be valueless. That’s in direct tension with trying to squick people out with a gross bioengineered intelligence that will be seen as Bad because it’s a human intervention, and not because it’s a mathematically unrigorous or meta-ethically unsophisticated intervention.
Which isn’t to say that we shouldn’t appeal to that audience. If they’re especially likely to misunderstand the problem, that might make it all the more valuable to alter their world-view. But it will only be useful to tap into their ‘don’t play God’ mentality if we, in the same cutting strike, demonstrate the foolishness of that stance.
I think a lot of smart high schoolers could read the sequence I provided above. If they aren’t exceptional enough to do so, they’re probably better off starting with CFAR-style stuff rather than MIRI-style stuff anyway, since I’d expect reading comprehension skills to correlate with argument evaluation skills.