whoever builds the first AI can take over the world, which makes building AI the ultimate arms race.
As the Wikipedians often say, “citation needed”. The first “AI” was built decades ago. It evidently failed to “take over the world”. Possibly someday a machine will take over the world—but it may not be the first one built.
In the opening sentence I used the (perhaps unwise) abbreviation “artificial general intelligence (AI)” because I meant AGI throughout the piece, but I wanted to be able to say just “AI” for convenience. Maybe I should have said “AGI” instead.
The first OS didn’t take over the world. The first search engine didn’t take over the world. The first government didn’t take over the world. The first agent of some type taking over the world is dramatic—but there’s no good reason to think that it will happen. History better supports models where pioneers typically get their lunch eaten by bigger fish coming up from behind them.
Yes, let’s engage in reference class tennis instead of thinking about object level features.
Doesn’t someone have to hit the ball back for it to be “tennis”? If anyone does so, we can then compare reference classes—and see who has the better set. Are you suggesting this sort of thing is not productive? On what grounds?
If we’re talking reference classes, I would cite the example that the first hominid species to develop human-level intelligence took over the world.
At an object level, if AI research goes secret at some point, then once team A develops human-level AGI, it seems unlikely, though not impossible, that team B would reach super-human-level AGI before team A does. If the research is fully public (which seems dubious but again isn’t impossible), these advantages would be less pronounced, and many teams might remain in close competition even after human-level AGI. Still, because human-level AGI can be scaled to run very quickly, it seems likely it could bootstrap itself to stay in the lead.
Note that humans haven’t “taken over the world” in many senses of the phrase. We are massively outnumbered and out-massed by our own symbionts—and by other creatures.
Machine intelligence probably won’t be a “secret” technology for long—due to the economic pressure to embed it.
While it’s true that things will go faster in the future, that applies about equally to all players—in a phenomenon commonly known as “internet time”.
Looks like someone already did.
And I’m not just suggesting this is not productive, I’m saying it’s not productive. My reasoning is standard: see here and also here.
Standard? Invoking reference classes is a form of arguing by analogy. It’s a basic thinking tool. Don’t knock it if you don’t know how to use it.
Don’t be obnoxious. I linked to two posts that discuss the issue in depth. There’s no need to reduce my comment to one meaningless word.
As has been pointed out numerous times on LessWrong, history is not a very good guide for dealing with AI, since it is likely to be a singular (if you’ll excuse the pun) event in history. Perhaps the only other thing it can be compared with is life itself, and we currently have no information about how it arose (did the first self-replicating molecule lead to all life as we know it? Or were there many competing forms of life, one of which eventually won?)
History shows a variety of singular events. But because they are singular, you can’t quantify their risk. So there is a contradiction between saying uFAI is a definite, known risk, and saying it is an unprecedented singularity.
What is meant by ‘known risk’ though? Do you mean ‘knowledge that AI is possible’, or ‘knowledge about what it will entail’? I agree with you completely that we have no information about the latter.
The latter.
What, a new thinking technology? You can’t be serious.