Yes, let’s engage in reference class tennis instead of thinking about object level features.
Doesn’t someone have to hit the ball back for it to be “tennis”? If anyone does so, we can then compare reference classes—and see who has the better set. Are you suggesting this sort of thing is not productive? On what grounds?
If we’re talking reference classes, I would cite the example that the first hominid species to develop human-level intelligence took over the world.
At an object level, if AI research goes secret at some point, it seems unlikely, though not impossible, that if team A develops human-level AGI, then team B will develop superhuman AGI before team A does. If the research is fully public (which seems dubious, but again isn’t impossible), then this first-mover advantage would be less pronounced, and it might well be that many teams remain in close competition even after human-level AGI. Still, because human-level AGI can be scaled to run very quickly, it seems likely it could bootstrap itself to stay in the lead.
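A minimal toy sketch of that last bootstrapping claim, assuming each team's rate of progress is proportional to its current capability; the head start, growth rate, and step count below are hypothetical illustrations, not figures from this exchange:

```python
# Toy model (editorial sketch): team A reaches human-level AGI with a small
# multiplicative head start over team B, and each team's capability then
# compounds proportionally to its own current level. All parameter values
# are hypothetical.

def capability_gap(head_start: float = 1.05, rate: float = 0.10, steps: int = 50):
    """Return (A, B, ratio, absolute gap) after `steps` rounds of
    self-improvement, where each team's progress scales with its
    current capability."""
    a, b = head_start, 1.0
    for _ in range(steps):
        a *= 1.0 + rate  # A's next capability compounds on A's current level
        b *= 1.0 + rate  # B's compounds likewise
    return a, b, a / b, a - b

a, b, ratio, gap = capability_gap()
# With equal growth rates, the *ratio* stays at the initial head start (1.05)
# while the *absolute* gap grows exponentially: the leader is never caught.
print(f"A={a:.1f}  B={b:.1f}  ratio={ratio:.3f}  gap={gap:.2f}")
```

Under equal growth rates the relative lead is merely preserved; if the leader's rate is even slightly higher (say, because its capability lets it capture more resources), the capability ratio itself diverges.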
Note that humans haven’t “taken over the world” in many senses of the phrase. We are massively outnumbered and out-massed by our own symbionts—and by other creatures.
Machine intelligence probably won’t be a “secret” technology for long—due to the economic pressure to embed it.
While it’s true that things will go faster in the future, that applies about equally to all players, a phenomenon commonly known as “internet time”.
Looks like someone already did.
And I’m not just suggesting this is not productive; I’m saying it’s not productive. My reasoning is standard: see here and also here.
Standard? Invoking reference classes is a form of arguing by analogy. It’s a basic thinking tool. Don’t knock it if you don’t know how to use it.
Don’t be obnoxious. I linked to two posts that discuss the issue in depth. There’s no need to reduce my comment to one meaningless word.