The kind of constraint you propose would be very useful. We would first have to prove that there is some kind of topology under general computation (because the machine can change its own language, so the solution can’t be language-specific) that only allows non-suicidal trajectories under all possible inputs and self-modifications (or at least allows suicidal ones only with low probability, though even that is not likely to be computable). I have looked but not found such a thing in existing theory. There is work on the topology of computation, but it is something different from this. I may simply be unaware of it, however.
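To make the "not likely to be computable" worry concrete, here is a toy diagonalization sketch in the spirit of the halting-problem argument: any total, always-correct "non-suicidal trajectory" checker can be defeated by a program that consults the checker about itself and does the opposite. The checker functions and the self-referential program here are hypothetical stand-ins invented for illustration, not anything from existing theory.

```python
def diagonal_program(checker):
    """A program that asks the checker about itself, then does the opposite.
    Returns True if its actual run is 'safe', False if it is 'unsafe'."""
    verdict = checker(diagonal_program)
    return not verdict  # behave contrary to the prediction

def optimistic_checker(program):
    return True   # claims every program is safe

def pessimistic_checker(program):
    return False  # claims every program is unsafe

# Whatever the checker predicts, the diagonal program's behavior refutes it:
assert diagonal_program(optimistic_checker) is False  # predicted safe, acts unsafe
assert diagonal_program(pessimistic_checker) is True  # predicted unsafe, acts safe
```

The same move works against any candidate checker, which is why a safety property of this kind (a non-trivial semantic property of programs) resists exact computation.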
Note that in the real-world scenario we also have to worry about entropy battering the design around, so we need a margin of error for that too.
Finally, the finite-time solution is practical but ultimately not satisfying. The short-term solution to being in a burning building may be to stay put; the long-term solution may be to risk short-term harm for long-term survival. So with only short-term solutions, one may end up in a dead end down the road. A practical limit on short-term advance simulation is that one still has to act in real time while the simulation runs. And if you want the simulation to take into account that simulations are occurring, we’re back to infinite regress...
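The regress at the end can be sketched in a few lines: a toy simulator asked to model a world that contains the simulator itself must simulate the simulation, and the recursion never bottoms out. The `simulate` function and its world representation are invented purely for illustration.

```python
import sys

def simulate(world, depth=0):
    """Toy world-model step. If the world contains the simulator itself,
    simulating it faithfully means simulating the simulation, and so on."""
    if "simulator" in world:
        # The simulated simulator must itself simulate the world...
        return simulate(world, depth + 1)
    return depth

sys.setrecursionlimit(1000)
try:
    simulate({"simulator"})
except RecursionError:
    print("infinite regress")
```

In practice one truncates the recursion at some depth, but then the simulation no longer accounts for the fact that simulations are occurring, which is exactly the limitation described above.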
No, I haven’t read the Sequences; I will do that. The link might be better named to indicate what it actually is. But I didn’t say the AIs would be safe (or super-intelligent, for that matter), and I don’t assume they would be. Those who create them may assume that, though.