Yeah, add me to the camp that says: “If anything morally interesting is happening here, it is happening when the mind-states are computed. Copying those computed-and-recorded states into various media, including ‘into memory,’ doesn’t have moral significance.”
More generally: any interesting computational properties of a system are interesting only during computation; the stored results of those computations lack those interesting properties.
So what happens if I do a mix? The computer can, at each step, choose randomly between reading a cached copy of state(t) and computing state(t) from state(t-1). No outside observer can tell which option the machine chose at each step, and the internal states are ALSO the same. You can also imagine caching parts of the brain-state at every step and recomputing other parts.
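A minimal sketch of that mixed scheme, in Python. The transition function and all names here are stand-ins I'm assuming for illustration, not anything specified in the thought experiment:

    import random

    def step(state):
        # Stand-in deterministic transition function (a simple LCG);
        # in the thought experiment this is the full brain-state update.
        return (state * 6364136223846793005 + 1442695040888963407) % 2**64

    def run_mixed(initial_state, num_steps, cache):
        # Replay the simulation, choosing at each step between recomputing
        # state(t) from state(t-1) and reading the cached copy of state(t).
        # Because step() is deterministic and the cache was produced by the
        # same function, both branches yield bit-identical states, so no
        # test on the states themselves reveals which branch was taken.
        state = initial_state
        for t in range(1, num_steps + 1):
            if random.random() < 0.5:
                state = step(state)   # recompute
            else:
                state = cache[t]      # look up
        return state

    # Build the cache once by actually computing every state...
    initial = 12345
    cache = {0: initial}
    for t in range(1, 11):
        cache[t] = step(cache[t - 1])

    # ...after which any mix of recomputation and lookup ends identically.
    assert run_mixed(initial, 10, cache) == cache[10]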
In any simulation, “compute state(t)” and “read a cached copy of state(t)” can blur into each other. And this is a problem philosophically, because they blur into each other in ways that don’t have externally-visible consequences. That means we’ll be drawing moral distinctions based on an implementation choice with no physical consequences, which seems like a problem from a consequentialist point of view.
“…because they blur into each other in ways that don’t have externally-visible consequences.”
Not true; look at polarix’s top-level comment.
A generalization of that idea is that torture represents, at minimum, a causal chain with the torturer as the cause and the victim as the effect. Changing some parameter of the torturer should therefore produce some change in some parameter of the victim. But if you’re just loading frames from memory, that does not occur. The causal chain is broken.
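A toy illustration of that broken chain, in Python. The function victim_state and the numbers are hypothetical stand-ins; the only assumption is that in a live simulation the victim's next state is a function of what the torturer does:

    def victim_state(torturer_input, prev_state):
        # Hypothetical stand-in: in a live simulation the victim's next
        # state depends causally on the torturer's action.
        return (prev_state * 31 + torturer_input) % 2**32

    def replay_frame(cache, t, torturer_input):
        # The replay path never consults torturer_input: the frame is just
        # read back from storage, so the causal link is absent.
        return cache[t]

    s0 = 7
    cache = {1: victim_state(100, s0)}  # frame recorded during the original run

    # Live computation: intervening on the cause changes the effect.
    assert victim_state(100, s0) != victim_state(999, s0)

    # Replay: the same intervention changes nothing about the victim frames.
    assert replay_frame(cache, 1, 100) == replay_frame(cache, 1, 999)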
OK. So a system S is computed in such a way that some interesting computational property (C) arises, and all the interim S-states are cached. I then execute a process P that at every step might be recomputing S or might be looking up the cached S-state, in such a way that no outside observer can tell the difference via any conceivable test. Yes?
So, sure, P might or might not cause C to arise, and we have no way of telling which is the case.
I’m not quite sure why this is particularly a problem for moral consequentialism. If C arising is a consequence we prefer to avoid, executing P in the first place is morally problematic in the same way that playing Russian roulette is… it creates the possibility of a future state we prefer to avoid. And creating the cached S-states in the first place required computing S, which definitely made C arise; that act was therefore immoral on moral-consequentialist grounds.
I can make a physical argument for this: if I can have subjective experiences while my computation is frozen, why do all available external observations of my subjective-experience process (i.e., my life) seem to show that I require time and calories in order to experience things?