I don’t know, actually. I’m not the one making these forecasts. It’s usually described as some broad-based increase of AI competence but not cashed out any further than that. I’ll remark that if there isn’t a sharp sudden bit of headline news, chances of a significant public reaction drop even further.
Sorry, what I meant is: what would you consider an event that ought to be taken seriously but won’t be? Eh, that’s not right; presumably that’s long past, like Deep Blue or maybe the first quine.
What would you consider an event that an AI researcher not sold on AI x-risks ought to take seriously but likely will not? A version of Watson which can write web apps from vague human instructions? A perfect simulation of C. elegans? A human mind upload?
Even I think they’d take a mind upload seriously—that might really produce a huge public update though probably not in any sane direction—though I don’t expect that to happen before a neuromorphic UFAI is produced from the same knowledge base. They normatively ought to take a spider upload seriously. Something passing a restricted version of a Turing test might make a big public brouhaha, but even with a restricted test I’m not sure I expect any genuinely significant version of that before the end of the world (unrestricted Turing test passing should be sufficient unto FOOM). I’m not sure what you ‘ought’ to take seriously if you didn’t take computers seriously in the first place. Aubrey was very specific in his prediction that I disagree with; people who forecast watershed opinion-changing events for AI are less so, at least as far as I can recall.
unrestricted Turing test passing should be sufficient unto FOOM
I don’t think this is quite right. Most humans can pass a Turing test, even though they can’t understand their own source code. FOOM requires that an AI be able to self-modify with enough stability to (a) keep wanting to self-modify, and (b) remain able to do so. Most uploaded humans would have a very difficult time with this; just look at how people resist even modifying their beliefs, let alone their thinking machinery.
The problem is that an AI which passes the unrestricted Turing test must be strictly superior to a human; it would still have all the expected AI abilities like high-speed calculation and so on. A human who was augmented to the point of passing the Pocket Calculator Equivalence Test would be superhumanly fast and accurate at arithmetic on top of still having all the classical human abilities; they wouldn’t be merely as smart as a pocket calculator.
High-speed calculation plus human-level intelligence is not sufficient for recursive self-improvement. An AI needs to be able to understand its own source code, and passing the Turing test (plus high-speed calculation) does not guarantee that ability.
If I am confident that a human is capable of building human-level intelligence, my confidence that a human-level intelligence cannot build a slightly-higher-than-human intelligence, given sufficient trials, becomes pretty low. Ditto my confidence that a slightly-higher-than-human intelligence cannot build a slightly-smarter-than-that intelligence, and so forth.
But, sure, it’s far from zero. As you say, it’s not a guarantee.
A human who was augmented to the point of passing the Pocket Calculator Equivalence Test
I thought a Human with a Pocket Calculator already is this augmented human. Unless you want to implant the calculator in your skull and control it with your thoughts, which will also soon be possible.
The biggest reason humans can’t do this is that we don’t implement .copy(). This is not a problem for AIs or uploads, even if they are otherwise only of human intelligence.
Sure, with a large enough number of copies of you to practice on, you would learn to do brain surgery well enough to improve the functioning of your brain. But it could easily take a few thousand years. The biggest problem with self-improving AI is understanding how the mind works in the first place.
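To make the “practice on copies” point concrete, here is a minimal sketch, assuming only that a mind can be represented as a parameter blob that can be copied, tweaked, and scored on some benchmark. Every name in it (`score`, `mutate`, `improve_via_copies`) is a hypothetical illustration, not anything proposed in this thread:

```python
import copy
import random

def score(params):
    # Hypothetical benchmark: how well a mind with these parameters performs.
    # (Stands in for the hard part: knowing what "improved functioning" even means.)
    return -sum((p - 1.0) ** 2 for p in params)

def mutate(params):
    # One small, random self-modification.
    tweaked = list(params)
    i = random.randrange(len(tweaked))
    tweaked[i] += random.gauss(0, 0.1)
    return tweaked

def improve_via_copies(params, trials=10000):
    """Experiment on copies; adopt a modification only if the copy scores better."""
    best = copy.deepcopy(params)
    for _ in range(trials):
        candidate = mutate(copy.deepcopy(best))  # the risky surgery happens to a copy
        if score(candidate) > score(best):       # keep the change only if it helped
            best = candidate
    return best

print(improve_via_copies([0.0, 0.0, 0.0]))
```

The only point of the sketch is that .copy() turns self-modification from irreversible brain surgery into ordinary trial and error; whether such a search takes days or the few thousand years suggested above is exactly the open question.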
Consider first of all a machine that can pass an “AI-focused Turing test”, by which I mean convincing one of the AI team that built it that it’s a human being with a comparable level of AI expertise.
I suggest that such a machine is almost certainly “sufficient unto FOOM”, if the judge in the test is allowed to go into enough detail.
An ordinary Turing test doesn’t require the machine to imitate an AI expert but merely a human being. So for a “merely” Turing-passing AI not to be “sufficient unto FOOM” (at least as I understand that term) what’s needed is that there should be a big gap between making a machine that successfully imitates an ordinary human being, and making a machine that successfully imitates an AI expert.
It seems unlikely that there’s a very big gap architecturally between human AI experts and ordinary humans. So, to get a machine that passes an ordinary Turing test but isn’t close to being FOOM-ready, it seems like what’s needed is a way of passing an ordinary Turing test that works very differently from actual human thinking, and doesn’t “scale up” to harder problems like the ordinary human architecture apparently does.
Given that some machines have been quite successful in stupidly-crippled pseudo-Turing tests like the Loebner contest, I suppose this can’t be entirely ruled out, but it feels much harder to believe in than a “narrow” chess-playing AI was, even at the time of Hofstadter’s prediction.
Still, I think there might be room for the following definition: the strong Turing test consists of having your machine grilled by several judges, with different domains of expertise, each of whom gets to specify in broad terms (ahead of time) what sort of human being the machine is supposed to imitate. So then the machine might need to be able to convince competent physicists that it’s a physicist, competent literary critics that it’s a novelist, civil rights activists that it’s a black person who’s suffered from racial discrimination, etc.
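If it helps to pin that definition down, here is one hypothetical way the setup could be written out, keeping only the structure described above (several expert judges, personas fixed ahead of time, and a pass only if every judge is convinced); the names and the `machine_chat` stand-in are placeholders of my own, not part of the proposal:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Judge:
    domain: str   # the judge's own field of expertise
    persona: str  # the kind of human the machine must imitate, specified ahead of time

def strong_turing_test(machine_chat: Callable[[str, str], bool],
                       judges: List[Judge]) -> bool:
    """Pass only if every expert judge ends up convinced by the assigned persona.

    machine_chat(domain, persona) stands in for a full interrogation session and
    returns True if that judge believes they were talking to a real such person.
    """
    return all(machine_chat(j.domain, j.persona) for j in judges)

judges = [
    Judge("physics", "a working physicist"),
    Judge("literary criticism", "a novelist"),
    Judge("civil rights activism",
          "a black person who has suffered racial discrimination"),
]
```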
I tend to agree, but I have to note the surface similarity with Hofstadter’s disproved “No, I’m bored with chess. Let’s talk about poetry.” prediction.