Wanted: “The AIs will need humans” arguments

As Luke mentioned, I am in the process of writing “Responses to Catastrophic AGI Risk”: a journal-bound summary of the AI risk problem and a taxonomy of the societal proposals (e.g. denial of the risk, no action, legal and economic controls, differential technological development) and AI design proposals (e.g. AI confinement, chaining, Oracle AI, FAI) that have been made.
One of the categories is “They Will Need Us”—claims that AI poses no big risk because AIs will always need something that humans have, and will therefore preserve us. Currently this section is pretty empty:
Supporting a mutually beneficial legal or economic arrangement is the view that AGIs will need humans. For example, Butler (1863) argues that machines will need us to help them reproduce, and Lucas (1961) suggests that machines could never show Gödelian sentences to be true, though humans can see that they are true.
But I’m certain that I’ve heard this claim made in more places than just those two sources. Does anyone remember having seen such arguments elsewhere? While “academically reputable” sources (papers, books) are preferred, blog posts and websites are fine as well.
Note that this claim is distinct from the claim that (due to general economic theory) it’s more beneficial for the AIs to trade with us than to destroy us. We already have enough citations for that argument; what we’re looking for are arguments saying that destroying humans would mean losing something essentially irreplaceable.