The main problems I see are the ones Eliezer described at the Singularity Summit: there are problems regarding AGI that we don’t know how to solve even in principle (I’m not sure whether this applies to AGI in general or only to Friendly AGI). So it may well be that we never solve these problems.
The most difficult part will be ensuring the friendliness of the AI. The biggest danger is someone else carelessly building an AGI that is not friendly.