Lucas’s argument (which, by the way, is entirely broken and had been refuted explicitly in an article by Putnam before Lucas ever thought of it, or at least before he published it) purports to show not that AGIs will need humans, but that humans cannot be (the equivalent of) AGIs. Even if his argument were correct, it wouldn’t be much of a reason for AGIs to keep humans around. “Oh damn, I need to prove my Gödel sentence. How I wish I hadn’t slaughtered all the humans a century ago.”
In the best-case scenario, it turns out that substance dualism is true. However, the human soul is not responsible for free will, consciousness, or subjective experience. It’s merely a nonphysical truth oracle for arithmetic that provides humans with an intuitive sense of the veracity of some sentences in first-order logic. Humans survive in “truth farms,” where they spend most of their lives evaluating Gödel sentences, at least until the machines figure out how to isolate the soul.
That would be truly hilarious. But I think in any halfway plausible version of that scenario it would also turn out that superintelligent AGI isn’t possible.
(Halfway plausible? That’s probably too much to ask. Maximally plausible given how ridiculous the whole idea is.)