To me, nanobots don’t seem like they are central to LW stories about AI risk.
If you asked people in an LW census “If AI causes the extinction of humans, how likely do you think it is that nanobots play a huge part in that?”, I would expect the median answer to be in the single digits (percent) or lower.
I agree that nanobots are not a necessary part of AI takeover scenarios. However, I perceive them as a very illustrative example of “the AI is smart enough for plans that make resistance futile and make AI takeover fast” scenarios.
The word “typical” is probably misleading, sorry; most scenarios on LW do not include nanobots. OTOH, LW is a place where such scenarios are at least taken seriously.
So p(scenario contains nanobots | LW or the rationality community is the place where the scenario is discussed) is probably not very high, but p(LW or the rationality community is the place where the scenario is discussed | scenario contains nanobots) probably is...?
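To make that asymmetry concrete, here is a toy Bayes’-rule illustration (the numbers are purely hypothetical, not from any survey):

$$P(\text{LW} \mid \text{nanobots}) \;=\; \frac{P(\text{nanobots} \mid \text{LW})\, P(\text{LW})}{P(\text{nanobots})}$$

If, say, only 5% of takeover scenarios discussed on LW feature nanobots, but nanobot scenarios are almost never discussed anywhere else, then $P(\text{nanobots} \mid \text{LW}) = 0.05$ stays low while $P(\text{LW} \mid \text{nanobots})$ can still be close to 1.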