But of course, it’s trivial to write a different Turing machine which writes on a tape (call it Tape B) the entire history of Machine A’s computation (as well as its output), and this indeed has the required richness for me to be comfortable in calling Tape B conscious.
In what context can Tape B be labeled conscious?
A history of consciousness does not seem to me to be the same as consciousness. A full debug trace of a program is simply not the same thing as the original program.
If, however, you create a Machine C that replays Tape B, I would grant that Machine C reproduces the consciousness of Machine A.
This gets into hairy territory, with no clear “conscious”/“not conscious” boundary across a spectrum of different variations, but I’d say that the interpretive framework needed to trace a thought from the log on Tape B is essentially the same as the interpretive framework needed to trace it from the action of Machine A on the start tape. They’re isomorphic mathematical objects.
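For concreteness, here is a minimal sketch of the whole arrangement, using a made-up two-rule machine as a stand-in for Machine A (the transition table, tape contents, and configuration format are all illustrative assumptions, not anything specified above): Machine A is run while every configuration is written out to a list standing in for Tape B, “Machine C” then reads the configurations back off that list, and the replayed sequence is checked to be identical, step for step, to the one produced by running Machine A directly.

```python
# Minimal sketch of the Tape B setup. The two-rule machine below (a unary
# incrementer) is an illustrative stand-in for "Machine A".

# Transition table: (state, symbol) -> (symbol_to_write, head_move, next_state)
MACHINE_A = {
    ("scan", "1"): ("1", +1, "scan"),   # walk right across the 1s
    ("scan", "_"): ("1",  0, "halt"),   # write one more 1, then halt
}

def run_machine_a(start_tape, max_steps=1000):
    """Run Machine A directly, yielding every configuration it passes through."""
    tape, head, state = dict(enumerate(start_tape)), 0, "scan"
    for _ in range(max_steps):
        yield state, head, dict(tape)   # the complete configuration at this instant
        if state == "halt":
            return
        symbol = tape.get(head, "_")
        written, move, state = MACHINE_A[(state, symbol)]
        tape[head] = written
        head += move
    raise RuntimeError("Machine A did not halt within max_steps")

# "Tape B": the entire history of Machine A's computation, written out in order.
tape_b = list(run_machine_a("111_"))

def machine_c(recorded_history):
    """Machine C: reproduce the computation by reading it back off Tape B."""
    for configuration in recorded_history:
        yield configuration

# The sequence of configurations read back off Tape B is identical, step for
# step, to the sequence produced by running Machine A itself: the sense in
# which the log and the live run are isomorphic objects.
assert list(machine_c(tape_b)) == list(run_machine_a("111_"))
```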
I agree with everything you say here.
I claim that the “interpretive framework” you refer to is essential in the labeling of Tape B as conscious. Without specifying the context, the consciousness of Tape B is unknown.
You might be interested in the thought experiment of a so-called “joke interpretation”, which maps the random molecule oscillations in (say) a rock onto a conscious mind, and asks what the difference is between this and a more “reasonable” map from a brain to a mind. There’s a good discussion of this in Good and Real.
I skimmed the material and see what you mean.
I would restate the thought experiment as follows. A state sequence measured from a rock is used to generate a look-up table that maps the rock state sequence onto a pre-measured consciousness state sequence. This is essentially an encryption of the consciousness state sequence, using the rock state sequence as a one-time pad. The consciousness state sequence can be regenerated by replaying the rock state sequence through the look-up table. The final question is: is the rock conscious?
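To pin down the one-time-pad framing, here is a toy sketch with made-up integer “states” standing in for the measured rock and consciousness sequences (the numbers are purely illustrative): the look-up table is built per time step, and replaying the rock sequence through it regenerates the consciousness sequence.

```python
# Toy version of the restated thought experiment.
rock_states          = [7, 2, 2, 9, 4]     # measured from the rock
consciousness_states = [1, 3, 0, 3, 2]     # pre-measured from the mind

# The look-up table is built per time step, so repeated rock states are not
# a problem: at step t, rock state r_t maps to consciousness state c_t.
# This is the one-time pad: the table is exactly as long as the sequence.
table = [{r: c} for r, c in zip(rock_states, consciousness_states)]

# "Replaying" the rock state sequence through the table regenerates the
# consciousness state sequence.
replayed = [table[t][r] for t, r in enumerate(rock_states)]
assert replayed == consciousness_states
```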
In the model I’ve outlined in my comments, consciousness exists at the level at which the consciousness abstraction is present. In this case that abstraction is not present at the level of the rock, but only at the level of the system that uses the look-up table, and only for the duration of the sequence. The states measured from the rock are used to generate the consciousness, but they are not the consciousness.
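A small extension of the same toy sketch makes this explicit: because the table is rebuilt around whatever source sequence you hand it, the rock can be swapped for random noise or a simple counter and the regenerated sequence is unchanged. All of the structure sits in the table and the system replaying it, not in the rock. (The helper name and toy sequences below are assumptions for illustration only.)

```python
import random

consciousness_states = [1, 3, 0, 3, 2]      # the pre-measured sequence again

def replay(source_states, target_states):
    """Build a per-step look-up table from source to target, then replay it."""
    table = [{s: t} for s, t in zip(source_states, target_states)]
    return [table[i][s] for i, s in enumerate(source_states)]

# Any source sequence at all will do: a rock, coin flips, a bare counter.
rock  = [7, 2, 2, 9, 4]
noise = [random.randrange(100) for _ in range(5)]
ticks = list(range(5))

for source in (rock, noise, ticks):
    assert replay(source, consciousness_states) == consciousness_states
```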
What is the “system that uses the look-up table”? Do you require a particular kind of physical system in order for consciousness to exist? If not, what if the “system” which replays the sequence is a human with a pen and paper? Does the system truly exhibit the original consciousness sequence, in addition to the human’s existing consciousness?
Ah, Chinese room questions.
The system that replays the sequence can be anything, including a human with pen and paper.
Yes, the system truly exhibits the original consciousness sequence, assuming that the measured consciousness sequence captured the essential elements of the original consciousness.
To “see” the original consciousness in this system you must adopt the correct context: the context that resolves the consciousness abstraction within the system. From that context you will not see the human. Conversely, if you see a human following instructions and making notes, you will not see the consciousness he is generating.
Consider a chess program playing a game against itself. If we glance at the monitor, we see the game as it progresses. If instead we could only examine the quarks that make up the computer, we would be completely blind to the chess program abstraction.
Is a mono-consciousness then impossible?
I’ll need more details. What is a mono-consciousness?
I was thinking of a magnetic monopole: a single consciousness that does not interact with any others.
Thanks for the clarification.
In my comments I have been working on the idea that consciousness is an abstraction. The context in which the consciousness abstraction exists is where consciousness can be found.
So a mono-consciousness would still have a context that supports a consciousness abstraction. I don’t see any problem with that. However, the consciousness might be like a feral child: no table manners and very strange to us.
How about this: if a consciousness tells a joke in a forest where no other consciousness can hear it, is the joke still funny?