For your moral questions, I think it would help if you replace “morally significant dormant Turing machine with no input devices” with “comatose human”.
If yes, notice that state N of the machine can be encoded as the initial state of the machine plus the number N. Would it suffice to just start incrementing a counter and say that the machine is running?
Notice that the state N of a comatose human patient can be encoded as the initial state plus the number N. Would it suffice to just start incrementing a stopwatch and say that the patient is well?
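(A minimal sketch of the encoding point, in Python with a made-up toy transition rule: for a deterministic machine that reads no input, the configuration after N steps is a pure function of the initial configuration and N, so an incrementing counter together with the initial configuration already encodes every later state. The open question is whether that encoding counts as the machine running.)

```python
def step(config):
    """One deterministic transition; stands in for the machine's rule table."""
    tape, head, state = config
    tape = dict(tape)                   # copy so earlier configurations are not mutated
    tape[head] = state                  # toy rule: write the current state number,
    return tape, head + 1, state + 1    # move right, and advance the state

def state_at(initial_config, n):
    """Recover the configuration at step n from (initial_config, n) alone."""
    config = initial_config
    for _ in range(n):
        config = step(config)
    return config

initial = ({}, 0, 0)   # blank tape, head at cell 0, start state 0
counter = 0            # the "stopwatch": just an integer being incremented
counter += 5

# (initial, counter) is enough information to determine the machine's state...
print(state_at(initial, counter))   # ({0: 0, 1: 1, 2: 2, 3: 3, 4: 4}, 5, 5)
# ...but nothing steps through those configurations until state_at is called.
```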
If I do not need to turn anything on, I might as well destroy the machine, because the Turing machine will still exist in a Platonic sense, and the Platonic machine won’t notice if I destroy a manifestation of it.
If I do not need to turn anything on, I might as well destroy the patient, because the patient will still exist in a Platonic sense, and the Platonic patient won’t notice if I destroy a manifestation of it.
The only Platonic sense in which things still exist after being destroyed is that we remember and think about them, which is a very weak form of simulation. If we could think with much more precision and vastly more power, then we could make thought-things ‘real’. But until we have simulations of such power, all we have is the real world. And in any case, everything that exists, even in simulation, must be encoded somewhere in the matter/energy of the universe.
For your moral questions, I think it would help if you replace “morally significant dormant Turing machine with no input devices” with “comatose human”.
Ah, but presumably if we were to wake up the comatose person, they would start interacting with the world, and their output would depend on the particulars of its state. In that case I clearly want to wake them up.
I was thinking of a morally significant dormant Turing machine that was not designed to have input devices. For example, a comatose person with no sensory organs. If they woke up, they would awaken to a life of dreams and dark solitude, proceeding deterministically from their initial state. Let’s assume there is absolutely no way to restore this person’s senses. It’s not clear to me that it’s morally desirable to wake them up.