I’m puzzled. Are you sure that’s your main objection? You make a different objection (I think) in your response to the sibling comment, and it seems to me that since any simulation of this kind will be incomplete, and since I assume the AI will seek the most efficient way to achieve its programmed goals, the scenario you describe is in fact horribly dangerous: the AI has an incentive to deceive us. (And, somewhat like Wei Dai, I thought we were really talking about an AI goal system that extrapolates human responses to various futures.)
It would be completely unfair of me to focus on the line “as thorough as a film might be today”. But since it’s funny, I give you Cracked.com on Independence Day.
To be honest, I was assuming we’re not talking about a “contained” UFAI, since that’s, you know, trivially unsafe.