If someone wants to question the importance of facing this problem, they really need to argue instead that a superintelligence isn’t possible (not even a modest one), or that the future will turn out close to the best possible just from everyone pushing forward their own research with no concern for the big picture, or perhaps that we don’t really care much about the far future and distant strangers and should pursue AI progress only for its immediate benefits.
False dilemma. For example, someone may think that superintelligences cannot arise quickly. Or they may think that improvement of our own intelligence will make us effectively superintelligences well before we solve the AI problem (because it is just that tricky).
The point is the eventual possibility of an intelligence significantly stronger than that of current humans, with “humans growing up” being a special case of that. The latter doesn’t resolve the problem, because “growing out of humans” doesn’t automatically preserve values; this is a problem that must be solved in any case where vanilla humans are left behind, no matter in what manner or how slowly that happens.
Do you mean that the set of possible objections I gave isn’t complete? If so, I didn’t mean to imply that it was.
For example, someone may think that superintelligences cannot arise quickly.
And therefore we’re powerless to do anything to prevent the default outcome? What about the Modest Superintelligences post that I linked to?
Or they may think that improvement of our own intelligence will make us effectively superintelligences well before we solve the AI problem (because it is just that tricky).
If someone has a strong intuition to that effect, then I’d ask them to consider how to safely improve our own intelligence.