Again, we have the capability to do this today: we can run two fully identical copies of a simulation program on two computers halfway around the world and keep the two simulations in perfect synchronization. There are a few difficulties, but they were solved in games a while ago. StarCraft solves this problem every time you play multiplayer, for example.
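To make the lockstep idea concrete, here is a minimal sketch (the Sim class and its update rule are invented for illustration, not anyone's actual simulation code): two copies of the same deterministic program, fed the identical ordered input stream, stay in bit-identical states.

```python
import hashlib
import struct


class Sim:
    """A tiny deterministic simulation: state depends only on (seed, inputs so far)."""

    def __init__(self, seed):
        self.state = seed
        self.tick = 0

    def step(self, inputs):
        # Fold this tick's inputs into the state with pure integer arithmetic;
        # no wall-clock reads, no floating point, no unordered iteration.
        for cmd in inputs:
            self.state = (self.state * 6364136223846793005 + cmd + 1) % (2**64)
        self.tick += 1

    def digest(self):
        return hashlib.sha256(struct.pack("<QQ", self.tick, self.state)).hexdigest()


# Two copies "halfway around the world": only the input feed is shared between them.
input_feed = [[3, 7], [], [42], [1, 1, 2]]      # one list of commands per tick
sim_a, sim_b = Sim(seed=12345), Sim(seed=12345)

for tick_inputs in input_feed:
    sim_a.step(tick_inputs)
    sim_b.step(tick_inputs)

assert sim_a.digest() == sim_b.digest()          # perfect synchronization
print("tick", sim_a.tick, "state digest", sim_a.digest()[:16])
```

The only thing that ever needs to cross the network is the input feed itself, which is exactly the next point.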
No, not at all: only the input feed from the sensors needs to be duplicated.
Clock issues are not relevant in a proper design, because wall-clock time has nothing to do with simulation time; simulation time is perfectly discrete and deterministic.
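A sketch of that separation, using the common fixed-timestep pattern and the Sim class from the sketch above (names and the tick length are illustrative): however jittery the real clock is, every machine computes the same tick N from the same inputs, so clock skew only changes when a tick is computed, never what it contains.

```python
import time

TICK_SECONDS = 1 / 30   # simulation time advances in fixed, discrete steps


def run_lockstep(sim, inputs_by_tick, wall_clock=time.monotonic):
    """Drive a deterministic simulation from a recorded input feed.

    Wall-clock jitter only affects *when* a tick gets computed, never *what*
    happens in it, so two machines with very different clocks still compute
    the identical sequence of states.
    """
    accumulator = 0.0
    previous = wall_clock()
    while sim.tick < len(inputs_by_tick):
        now = wall_clock()
        accumulator += now - previous
        previous = now
        while accumulator >= TICK_SECONDS and sim.tick < len(inputs_by_tick):
            sim.step(inputs_by_tick[sim.tick])   # deterministic: tick + inputs only
            accumulator -= TICK_SECONDS
```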
(4, 5, 6): These deal with errors and/or downtime; the solutions are similar. Errors are handled with error correction and by re-running that particular piece of code. Distributed deterministic simulation is well-studied in computer science. Our worst-case fallback in this use case is also somewhat easier: we can just pause both simulations until the error or downtime issue is handled.
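A toy version of that fallback, assuming replicas like the Sim sketch above that expose step() and digest() (a real system would exchange digests over the network and checkpoint to disk rather than compare in-process): periodically compare state digests, and on any divergence pause, roll both replicas back to the last agreed checkpoint, and replay the inputs since then.

```python
import copy


def checked_run(sim_a, sim_b, input_feed, checkpoint_every=100):
    """Run two replicas in lockstep; on divergence, restore the last agreed
    checkpoint and replay -- the 'pause until the error is handled' fallback."""
    checkpoint = (copy.deepcopy(sim_a), copy.deepcopy(sim_b))
    tick = 0
    while tick < len(input_feed):
        sim_a.step(input_feed[tick])
        sim_b.step(input_feed[tick])
        tick += 1
        if tick % checkpoint_every == 0 or tick == len(input_feed):
            if sim_a.digest() != sim_b.digest():
                # Divergence detected (cosmic ray, bad RAM, ...): pause both,
                # restore the last known-good state, and replay from there.
                # Transient errors won't recur on replay, so this converges.
                sim_a = copy.deepcopy(checkpoint[0])
                sim_b = copy.deepcopy(checkpoint[1])
                tick = sim_a.tick
                continue
            checkpoint = (copy.deepcopy(sim_a), copy.deepcopy(sim_b))
    return sim_a, sim_b
```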
(7): This is important even if you aren't running two copies.
Running two large-scale computer systems that work exactly the same way is a hard task. Really hard. Somewhat like maintaining quantum entanglement in a large system.
Not even remotely close. One is part of current technical practice; the other is a distant research goal.
This argument hinges on how fault-tolerant you regard identity to be. You cannot isolate the computers from all error. Say part of the error correction involves comparing the states of the two computers: would any correction be “death” for one of the simulated yous?
Bottom line: not interesting; revisit the question when we have more data.