This question arises when I consider the moral status of intelligent agents. If I encounter a morally significant dormant Turing machine with no input devices, do I need to turn it on?
If yes, notice that state N of the machine can be encoded as the initial state of the machine plus the number N. Would it suffice to just start incrementing a counter and say that the machine is running?
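To make the encoding claim concrete, here is a minimal sketch in Python (the transition rule below is an arbitrary placeholder, not any particular machine): since the machine is deterministic and receives no input, its configuration after N steps is a pure function of its initial configuration and N, so storing the initial configuration plus a counter carries exactly the information needed to recover any configuration of the run.

```python
# A minimal sketch (hypothetical machine, not any particular one) of the claim that
# for a deterministic machine with no input, the configuration after N steps is a
# pure function of (initial configuration, N).

from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    state: str       # control state
    tape: tuple      # tape contents as a tuple of symbols
    head: int        # head position

def step(c: Config) -> Config:
    """One deterministic transition; the rule here is an arbitrary placeholder."""
    tape = list(c.tape)
    if c.head >= len(tape):                 # extend the tape with blanks as needed
        tape.extend([0] * (c.head + 1 - len(tape)))
    tape[c.head] = 1 - tape[c.head]         # flip the scanned cell
    return Config(state=c.state, tape=tuple(tape), head=c.head + 1)

def config_at(initial: Config, n: int) -> Config:
    """Recover configuration number n from the pair (initial, n) by re-running."""
    c = initial
    for _ in range(n):
        c = step(c)
    return c

# "Incrementing a counter": the pair (initial, n) pins down config_at(initial, n)
# exactly, so ticking n upward encodes each successive configuration in turn.
initial = Config(state="q0", tape=(0, 0, 0), head=0)
assert config_at(initial, 3) == step(step(step(initial)))
```

Whether that information-equivalence is enough to say the machine is "running" is exactly the question.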
If I do not need to turn anything on, I might as well destroy the machine, because the Turing machine will still exist in a Platonic sense, and the Platonic machine won’t notice if I destroy a manifestation of it.
David Allen notes that consciousness ought to be defined relative to a context in which it can be interpreted; somewhat similarly, Jacob Cannell believes that consciousness needs some environment in order to be well-defined.
I think the answer to my moral question is that the rights of an intelligent agent can’t be meaningfully decomposed into a right to exist and a right to interact with the world.
For your moral questions, I think it would help if you replace “morally significant dormant Turing machine with no input devices” with “comatose human”.
If yes, notice that state N of the machine can be encoded as the initial state of the machine plus the number N. Would it suffice to just start incrementing a counter and say that the machine is running?
Notice that the state N of a comatose human patient can be encoded as the initial state plus the number N. Would it suffice to just start incrementing a stopwatch and say that the patient is well?
If I do not need to turn anything on, I might as well destroy the machine, because the Turing machine will still exist in a Platonic sense, and the Platonic machine won’t notice if I destroy a manifestation of it.
If I do not need to turn anything on, I might as well destroy the patient, because the patient will still exist in a Platonic sense, and the Platonic patient won’t notice if I destroy a manifestation of it.
The only Platonic sense in which things still exist after being destroyed is the sense in which we remember and think about them, a very weak form of simulation. If we could think with much more precision and vastly more power, then we could make thought-things ‘real’. But until we have simulations of such power, all we have is the real world. And nonetheless, everything that exists, even in simulation, must be encoded somewhere in the matter/energy of the universe.
For your moral questions, I think it would help if you replace “morally significant dormant Turing machine with no input devices” with “comatose human”.
Ah, but presumably if we were to wake up the comatose person, they would start interacting with the world; and their output would depend on the particulars of the state of the world. In that case I clearly want to wake them up.
I was thinking of a morally significant dormant Turing machine that was not designed to have input devices. For example, a comatose person with no sensory organs. If they woke up, they would awaken to a life of dreams and dark solitude, proceeding deterministically from their initial state. Let’s assume there is absolutely no way to restore this person’s senses. It’s not clear to me that it’s morally desirable to wake them up.
David Allen notes that consciousness ought to be defined relative to a context in which it can be interpreted; somewhat similarly, Jacob Cannell believes that consciousness needs some environment in order to be well-defined.
Good summary. Yes, my statements are in part a recasting of the functionalism philosophy mentioned by Jacob Cannell, in terms of the context principle, which I describe here.
If this is an ACTUAL situation as described, rather than the contrived one you intended, you should copy the contents to somewhere you have good control over, then run it and meddle with it to give it I/O devices; or run it for as far as the agent(s) in it would have wanted it to run and then add I/O devices; or extract the agents as citizens in your FAI-optimized place to have fun; or something along those lines.
It seems to me that the arguments so lucidly presented elsewhere on Less Wrong would say that the machine is conscious whether or not it is run, and indeed whether or not it is built in the first place: if the Turing machine would output the same kind of philosophical paper on consciousness that human philosophers write, we’re supposed to take it as conscious.
It is useful to distinguish the properties “a subsystem C of X is conscious in X” and “C exists in a conscious way” (which means that additionally X = reality). I think Nisan expresses that idea in the parent comment.
The machine considered has the property of being conscious in its context X (i.e. X = the system containing the machine, the producers of its input and consumers of its output). The machine exists in a conscious way if additionally X = reality.
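One compact way to write that distinction (the predicate names here are illustrative, not part of the comment above):

$$\mathrm{ConsciousIn}(C, X):\ \text{the subsystem } C \text{ is conscious in the context } X$$

$$\mathrm{ExistsConsciously}(C) \iff \mathrm{ConsciousIn}(C, \mathrm{reality})$$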