unrestricted Turing test passing should be sufficient unto FOOM
I don’t think this is quite right. Most humans can pass a Turing test, even though they can’t understand their own source code. FOOM requires that an AI be able to self-modify with enough stability that it continues to (a) desire to self-modify, and (b) be able to do so. Most uploaded humans would have a very difficult time with this: just look at how people resist even modifying their beliefs, let alone their thinking machinery.
The problem is that an AI which passes the unrestricted Turing test must be strictly superior to a human; it would still have all the expected AI abilities, like high-speed calculation and so on. A human who was augmented to the point of passing the Pocket Calculator Equivalence Test would be superhumanly fast and accurate at arithmetic on top of still having all the classical human abilities; they wouldn’t be merely as smart as a pocket calculator.
High-speed calculation plus human-level intelligence is not sufficient for recursive self-improvement. An AI needs to be able to understand its own source code, and passing the Turing test (even with high-speed calculation) does not guarantee that ability.
If I am confident that a human is capable of building human-level intelligence, my confidence that a human-level intelligence cannot build a slightly-higher-than-human intelligence, given sufficient trials, becomes pretty low. Ditto my confidence that a slightly-higher-than-human intelligence cannot build a slightly-smarter-than-that intelligence, and so forth.
But, sure, it’s far from zero. As you say, it’s not a guarantee.
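To make the compounding in that chain explicit, here's a toy model in Python. The 1% capability gain per generation and the capability units are entirely made up for illustration; the point is only that small per-step gains, iterated, don't stay small.

```python
# Toy model of the chain above. The 1% gain per generation and the
# capability numbers are made up purely for illustration.
def foom_chain(initial_capability=1.0, gain_per_generation=1.01,
               generations=1000):
    capability = initial_capability
    for _ in range(generations):
        # Each intelligence builds a successor slightly smarter than
        # itself, "given sufficient trials".
        capability *= gain_per_generation
    return capability

print(foom_chain())  # ~20959: a thousand 1% steps is a ~21,000x gain
```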
A human who was augmented to the point of passing the Pocket Calculator Equivalence Test
I thought a human with a pocket calculator already is this augmented human. Unless you want to implant the calculator in your skull and control it with your thoughts, which will also soon be possible.
The biggest reason humans can’t do this is that we don’t implement .copy(). This is not a problem for AIs or uploads, even if they are otherwise only of human intelligence.
Sure, with a large enough number of copies of you to practice on, you would learn to do brain surgery well enough to improve the functioning of your brain. But it could easily take a few thousand years. The biggest problem with self-improving AI is understanding how the mind works in the first place.
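For concreteness, here's a minimal Python sketch of the copy-and-test loop being described. All the names (Mind, evaluate, mutate) are hypothetical stand-ins; the point is that .copy() lets you run risky experiments on duplicates and keep only what works, even with no understanding of how the mind works.

```python
import copy
import random

class Mind:
    """Hypothetical stand-in for an upload or AI."""
    def __init__(self, parameters):
        self.parameters = parameters

    def copy(self):
        # The operation humans lack: a cheap, perfect duplicate.
        return Mind(copy.deepcopy(self.parameters))

def evaluate(mind):
    # Stand-in for some benchmark of cognitive performance.
    return sum(mind.parameters)

def mutate(mind):
    # Stand-in for risky, poorly-understood "brain surgery",
    # performed on a copy so that failure costs nothing.
    candidate = mind.copy()
    i = random.randrange(len(candidate.parameters))
    candidate.parameters[i] += random.gauss(0, 1)
    return candidate

def improve(mind, trials):
    best = mind
    for _ in range(trials):
        candidate = mutate(best)
        if evaluate(candidate) > evaluate(best):
            best = candidate  # keep the improvement, discard the rest
    return best

smarter = improve(Mind([0.0] * 10), trials=10_000)
```

Note that this is blind search: it improves the mind without understanding it, which is exactly why it "could easily take a few thousand years". Understanding how the mind works is what would collapse the number of trials.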