I do think “business as usual” or “the default future” is the threat that existential-risk people should be imagining.
Automation leads to a world where humans vote for government welfare for themselves. Governments then seem likely to compete with each other to attract corporations with low-tax regimes, and to shed their human burdens. This scenario resembles the early parts of Manna. It leads to a world where humans are functionally redundant—though they may persist as a kind of parasitic organic layer on top of the machine world.
Meanwhile, many humans seem likely to be memetically hijacked, potentially leading to fertility and population declines. That may be a slow process, though.
The vast majority of writing about these issues tells a story of terrorists or scientists (wizards meddling with things man was not meant to know) accidentally creating paperclip-making machines. That isn’t thinking; it’s straight out of folklore, e.g. “Why the Sea is Salt”.
Well, only around here. Other folk are looking at the effects of automation. Here’s my overview:
http://alife.co.uk/essays/will_machines_take_our_jobs/