You focus on visibly HAL-like or Skynet-like AI—the sort of thing that AI researchers produce as demos. However, we have large, smart, durable, existing entities (businesses and other computer+human teams) that are continuously getting smarter (and entrenching themselves deeper into our society) by automating their existing business practices.
I don’t advocate trying to stop business automation, or humans organizing themselves into better and better teams; I think that would be throwing the baby out with the bathwater. However, I do think “business as usual” or “the default future” is the threat that existential-risk people should be imagining.
The vast majority of writing about these issues has a story of terrorists or scientists (who are wizards meddling with things man was not meant to know) accidentally creating paperclip-making machines. That isn’t thinking; that’s straight out of folklore—e.g. “Why the Sea is Salt.”
Automation leads to a world where humans vote for government welfare for themselves. Governments then seem likely to compete with each other, using low-tax regimes to attract corporations, and to shed their human burdens. This scenario is similar to the early parts of Manna. It leads to a world where humans are functionally redundant—though they may persist as a kind of parasitic organic layer on top of the machine world.
Meanwhile, many humans seem likely to be memetically hijacked, potentially leading to fertility and population declines. That may be a slow process, though.
Well, only around here. Other folk are looking at the effects of automation. Here’s my overview:
http://alife.co.uk/essays/will_machines_take_our_jobs/