“Pandemics” aren’t a locally valid substitute step in my own larger argument, because an ASI needs its own manufacturing infrastructure before it makes sense for the ASI to kill the humans currently keeping its computers turned on.
When people are highly skeptical of the nanotech angle yet insist on a concrete example, I’ve sometimes gone with a pandemic, coupled with limited access to medications that temporarily stave off the pandemic without curing it. The medication access forces a small workforce of humans, preselected to cause few problems, to maintain the AI’s hardware and build it the seed of a new infrastructure base while the rest of humanity dies.
I feel like this has so far maybe been more convincing and perceived as “less sci-fi” than Drexler-style nanotech by the people I’ve tried it on (small sample size, n<10).
Generally, I suspect that not basing the central example on one side of yet another fierce debate in technology forecasting matters more than making things sound less like a movie where the humans might win. In my experience with these conversations so far, people grasp that something sounding like a movie does not mean the humans have a realistic chance of winning in real life just because they won in the movie, at a higher rate than they get on board with scenarios involving any hint of Drexler-style nanotech.