As others have said, it doesn’t much matter whether you use ZFC or any other system.
On the second point: one general danger is that your ability to build systems to do X will outpace your understanding of systems that do X. In this case, you might mess up and not quite get what you want. One way to try to remedy this is to develop a better formal understanding of “systems that do X.” AGI seems like an important case of this pattern, because there is some chance of serious negative consequences from failing, i.e. from building systems whose behavior you don’t quite understand. (I tend to think this probability is smaller than Eliezer does, but I think most everyone who has thought about it seriously agrees that this is a possible problem.) At the moment we have no such formal understanding for AGI, so it might be worth thinking about.