I don’t understand this question. Why would the answer to that question matter? (In your post, you write “If the answer is yes to all of the above, I’d be a little more skeptical.”) Also, the “story” is not really popular. Outside of LessWrong discussions and a few other places, people seem to think that any expectation about the future that involves a superintelligent agentic AGI sounds like science fiction and therefore does not have to be taken seriously.
Sorry for not being clear. My question was whether LW really likes the nanobot story because we think it might happen within our own lifetimes. If we knew for a fact that human-destroying nanobots would take another 100 years to develop, would discussing them still be just as interesting?
Side note: I don’t think the “sci-fi bias” concept is super coherent in my head. I wrote this post as best I could, but I fully acknowledge that it’s not fully fleshed out.
Yes, people care about things that are expected to happen today rather than in 1,000 years or later. That is a problem that people fighting against climate change have been pointing out for a long time.
At the same time, with respect to AI, my impression is that many people do not react to developments that will quickly have strong implications, while some others write a lot about caring about humanity’s long-term future.