Our claims do not contradict. If FAI succeeds, then so does UFAI prevention, so UFAI prevention is in some sense a subproblem. But UFAI prevention remains the more important problem.
There are three possible states of the world in 50 years: No AI, UFAI, and FAI.
Utility(No AI) - Utility(UFAI) >> Utility(FAI) - Utility(No AI)
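To spell out what that inequality buys you (my own illustration; the numbers below are made up, and only their ordering matters): shifting probability mass away from UFAI improves expected utility far more than shifting the same mass toward FAI, because the first gain scales with Utility(No AI) - Utility(UFAI) and the second with Utility(FAI) - Utility(No AI).

```python
# Back-of-the-envelope sketch with illustrative (made-up) utilities that
# respect U(No AI) - U(UFAI) >> U(FAI) - U(No AI).
U_NO_AI = 0.0     # baseline
U_FAI = 1.0       # good outcome: somewhat better than baseline
U_UFAI = -100.0   # catastrophic outcome: vastly worse than baseline

def expected_utility(p_no_ai, p_ufai, p_fai):
    """Expected utility over the three exhaustive 50-year outcomes."""
    assert abs(p_no_ai + p_ufai + p_fai - 1.0) < 1e-9
    return p_no_ai * U_NO_AI + p_ufai * U_UFAI + p_fai * U_FAI

baseline = expected_utility(0.50, 0.25, 0.25)

# Move 5% of probability mass from UFAI to No AI (pure UFAI prevention)...
prevention = expected_utility(0.55, 0.20, 0.25)

# ...versus moving 5% from No AI to FAI (pure FAI progress).
progress = expected_utility(0.45, 0.25, 0.30)

print(prevention - baseline)  # +5.0  = 0.05 * (U_NO_AI - U_UFAI)
print(progress - baseline)    # +0.05 = 0.05 * (U_FAI - U_NO_AI)
```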
It isn’t a more difficult problem, though. It is an easier problem.
The idea that it is more important is Nick Bostrom’s “Maxipok” principle: maximize the probability of an OK outcome.
I must point out that “the FAI problem” could refer to one of two things: creating FAI before UFAI, or the pure technical problem of building FAI given essentially unlimited time. The former (which is basically what UFAI prevention amounts to) is, I expect, far harder than the latter.
So, for the benefit of anyone reading, UFAI prevention is 1) at least as easy as creating FAI before UFAI (which will probably involve more than just software development) but 2) much harder than the pure technical problem of building FAI itself.
If we are entertaining abstract problems from fantasy worlds, there is also the case of unlimited resources to consider.
I’m trying to be more pragmatic than that. The average person, reading “how hard is it to build FAI?”, probably does not think of the task of building FAI while also trying to prevent UFAI. They think of solving decision theory and metaethics and implementing CEV, or whatever. There’s a sensible notion of how hard it is to build FAI on its own, without involving UFAI prevention. That’s what I’m talking about.
And I don’t want people to confuse those things. It’s one thing to say UFAI prevention is as easy as “building FAI (before UFAI)”. But that is much harder than just building FAI in a world without the UFAI threat, which is what I think people will picture when you say “no, we just have to build FAI”. Well, yes, but you don’t just have to build it; you have to build it before anyone else creates AGI.