So a system S is computed in such a way that some interesting computational property (C) arises, and all the interim S-states are cached. I then execute a process P that at every step might be recomputing S or might be looking up the cached S-state, in such a way that no outside observer can tell the difference via any conceivable test. Yes?
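That setup can be sketched as a toy memoization scheme. Everything here — the transition function `step`, the process `run_P`, the 50/50 coin flip — is a hypothetical illustration of the structure described above, not anything specified in the discussion:

```python
import random

def step(state):
    # Hypothetical transition function for system S; stands in for
    # whatever computation gives rise to the interesting property C.
    return (state * 31 + 7) % 1000

# First run: actually compute S, caching every interim state.
cache = {}
s = 1
for i in range(100):
    cache[i] = s
    s = step(s)

def run_P(seed=None):
    """Process P: at each step, either recompute S's next state or
    look it up in the cache. Because the cache was built from the
    same computation, both paths yield identical states, so no
    outside observer of the trace can tell which path was taken."""
    rng = random.Random(seed)
    s = cache[0]
    trace = [s]
    for i in range(99):
        if rng.random() < 0.5:
            s = step(s)        # recompute path
        else:
            s = cache[i + 1]   # lookup path
        trace.append(s)
    return trace
```

Running `run_P` with different seeds makes different recompute/lookup choices at every step, yet always produces the same trace — which is exactly the indistinguishability doing the work in the puzzle.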
So, sure, P might or might not cause C to arise, and we have no way of telling which is the case.
I’m not quite sure why this is particularly a problem for moral consequentialism. If C arising is a consequence we prefer to avoid, executing P in the first place is morally problematic in the same way that playing Russian Roulette is… it creates the possibility of a future state we prefer to avoid. And creating the cached S-states required actually computing S, so C definitely arose then; that was an act with a consequence we prefer to avoid, and therefore immoral on moral-consequentialist grounds.