I think Eliezer’s goal was mainly to illustrate the kind of difficulty FAI poses, rather than the size of the difficulty. But the two aren’t totally unrelated; basic conceptual progress and coming up with new formal approaches often require a fair amount of serial time (especially where one insight is needed before you can even start working toward a second insight), and progress is often sporadic compared to more applied, well-understood technical goals.
It would usually be extremely tough to estimate how much work was left if you were actually in the “rocket alignment” hypothetical—e.g., to tell with confidence whether you were 4 years or 20 years away from solving “logical undiscreteness”. In the real world, similarly, I don’t think anyone knows how hard the AI alignment problem is. If we can change the character of the problem from “we’re confused about how to do this in principle” to “we fundamentally get how one could align an AGI in the real world, but we haven’t found code solutions for all the snags that come with implementation”, then it would be much less weird to me if you could predict how much work was still left.