You have a block-world (as in eternalism?) representation of the world that includes the AI system itself (and the block-world representation inside that system, and so forth?). My mind boggles at this a bit. How does it know what it will do before it makes the decision to do it? Formal proofs?
I suspect I need to see a formal(ish) spec of the system before I can talk intelligently about how it might or might not fall into the pitfalls I see.