To translate Graham’s statement back to the FAI problem: In Eliezer’s alignment talk, he discusses the value of solving a relaxed-constraint version of the FAI problem by granting oneself unlimited computing power. In the same way, the AGI problem can be seen as a relaxed-constraint version of the FAI problem. One could argue that it’s a waste of time to try to make a secure version of AGI Approach X if we don’t even know whether it’s possible to build an AGI using AGI Approach X. (I don’t agree with this view, but I don’t think it’s entirely unreasonable.)
Isn’t the point exactly that if you can’t solve the whole problem of (AGI + Alignment) then it would be better not even to try solving the relaxed problem (AGI)?
Maybe not, if you can keep your solution to AGI secret and suppress it if it turns out that there’s no way to solve the alignment problem in your framework.