Actually, LessWrong AGI warnings don’t sound like they could be the plot of a successful movie. In a movie, John Connor organizes humanity to fight against Skynet. That does not seem plausible with LW-typical nanobot scenarios.
To me, nanobots don’t seem like they are central to LW stories about AI risk.
If you asked people in an LW census, “If AI causes the extinction of humans, how likely do you think it is that nanobots play a huge part in that?”, I would expect the median answer to be in the single digits of percent or lower.
I agree that nanobots are not a necessary part of AI takeover scenarios. However, I perceive them as a very illustrative example of “the AI is smart enough for plans that make resistance futile and make AI takeover fast” scenarios.
The word “typical” is probably misleading, sorry; most scenarios on LW do not include nanobots. OTOH, LW is a place where such scenarios are at least taken seriously.
So p(scenario contains nanobots | LW or the rationality community is the place where the scenario is discussed) is probably not very high, but p(LW or the rationality community is the place where the scenario is discussed | scenario contains nanobots) probably is...?
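A toy calculation may make the asymmetry concrete; the numbers below are entirely made up for illustration, not survey data:

```python
# Hypothetical counts of AI-takeover scenarios, purely illustrative.
scenarios_on_lw = 400      # scenarios discussed on LW / in the rationality community
nanobots_on_lw = 20        # of those, scenarios featuring nanobots (5%)
nanobots_elsewhere = 2     # nanobot scenarios discussed anywhere else

p_nanobots_given_lw = nanobots_on_lw / scenarios_on_lw
p_lw_given_nanobots = nanobots_on_lw / (nanobots_on_lw + nanobots_elsewhere)

print(p_nanobots_given_lw)   # 0.05  -> low: most LW scenarios don't involve nanobots
print(p_lw_given_nanobots)   # ~0.91 -> high: nanobot scenarios are mostly discussed on LW
```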
This seems pretty false. There is at least one pretty successful fiction book written about the intelligence explosion (which, imo, would have been better if in subsequent books gur uhznaf qvqa’g fheivir).
Gnargh. Of course someone has a counterexample. But I don’t think that is the typical LW AGI warning scenario. However, this could become a “no true Scotsman” discussion...
You mean like Gwern’s It Looks Like You’re Trying To Take Over The World? I think that made a good short story. Though I don’t think it would make a good movie, since there’s little in the way of cool visuals.
Greg Egan’s Crystal Nights is also more similar to the usual way things are imagined, though uhznavgl vf fnirq ol gur hayvxryl qrhf rk znpuvan bs vg orvat rnfvre sbe gur fvzhyngrq pvivyvmngvba gb znxr n cbpxrg qvzrafvba guna gnxr bire gur jbeyq.
Crystal Nights is also very similar to Eliezer’s That Alien Message / Alicorn’s Starwink.
Edit: There are also likely tons more such books written by Ted Chiang, Vernor Vinge, Greg Egan, and others, which I haven’t read yet so can’t list with confidence and without spoilers to myself.
Thanks for the list! Yes, it is possible to imagine stories that involve a superintelligence.
I could not imagine a movie or successful story where everybody is killed by an AGI within seconds because it prepared that in secrecy, nobody realized it, and nobody could do anything about it. That seems to lack not only a happy ending but even a story.
However, I am glad to be corrected, and will check the links, the stories will surely be interesting!
Minor corrections: “Crystal Nights” does not have an H in the first word and is by Greg Egan. (The linked copy is on his own website, in fact, which also includes a number of his other works.)
Thanks! I remember consciously thinking both those things, but somehow did the opposite of that.
Nanobots destroying all humans at once do indeed make for poor sci-fi. But how much of this story’s popularity hinges on it happening within our lifetimes?
I don’t understand this question. Why would the answer to that question matter? (In your post, you write “If the answer is yes to all of the above, I’d be a little more skeptical.”) Also, the “story” is not really popular. Outside of LessWrong discussions and a few other places, people seem to think that every expectation about the future that involves a superintelligent, agentic AGI sounds like science fiction and therefore does not have to be taken seriously.
Sorry for not being clear. My question was whether LW really likes the nanobot story because we think it might happen within our own lifetimes. If we knew for a fact that human-destroying-nanobots would take another 100 years to develop, would discussing them still be just as interesting?
Side note: I don’t think the “sci-fi bias” concept is super coherent in my head. I wrote this post as best I could, but I fully acknowledge that it’s not fully fleshed out.
Yes, people care about things that are expected to happen today rather than in 1,000 years or later. That is a problem that people fighting against climate change have been pointing out for a long time. At the same time, with respect to AI, my impression is that many people do not react to developments that will quickly have strong implications, while some others write a lot about caring about humanity’s long-term future.