I think it all boils down to a very simple showstopper: if you are building a perfect simulation, how many atoms do you need to simulate an atom?
Perfect simulation is not the only means of self-knowledge.
As for empirical knowledge, I’m not sure Eliezer expects an AI to take over the world with no observations/input at all, but he does think that people far overestimate the amount of observations an effective AI would need.
(Also, for an AI, “building a new AI” and “self-improving” are pretty much the same thing. There isn’t anything magic about “self”. If the AI can write a better AI, it can write a better AI; whether it calls that code “self” or not makes no difference. Granted, it may be somewhat harder for the AI to make sure the new code has the same goal structure if it’s written from scratch, but there’s no particular reason it has to start from scratch.)