Consciousness is a roughly defined (and leaky) abstraction.
This leads to the conclusion that, if a Turing machine computes consciousness and summarizes its output in a static representation on a tape, then the tape itself is conscious.
Without context, the contents of the tape have no meaning. So the consciousness that has been written out onto the tape is a consciousness only within a context that can use it to generate the consciousness abstraction.
It is the set of “stuff” that produces the consciousness abstraction that can be called conscious. In a Turing machine, this “stuff” would be the tape plus the machine that gives the tape the necessary context.
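To make that distinction concrete, here is a minimal sketch (the machine and its transition table are invented purely for illustration, not taken from anything above): the tape by itself is just stored symbols, while the transition table and the loop that applies it are the “context” that gives those symbols any behavior at all.

```python
from collections import defaultdict

# Hypothetical tape contents: on its own this is just a mapping from
# positions to symbols, with no behavior at all.
tape = defaultdict(lambda: "_", enumerate("110101"))

# The "context": a transition table, (state, symbol) -> (next state,
# symbol to write, head movement). Without it the tape means nothing.
transitions = {
    ("scan", "1"): ("scan", "1", +1),
    ("scan", "0"): ("scan", "0", +1),
    ("scan", "_"): ("halt", "_", 0),
}

def run(tape, transitions, state="scan", head=0, max_steps=1000):
    """Apply the transition table until the machine halts. It is this
    loop, not the tape, that turns the stored symbols into a process."""
    while state != "halt" and max_steps > 0:
        state, write, move = transitions[(state, tape[head])]
        tape[head] = write
        head += move
        max_steps -= 1
    return state

print(run(tape, transitions))  # -> halt
```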
As Nisan asked above: is this Turing machine conscious if you don’t run it? It seems that consciousness requires some type of thought, and that thought requires the system to self-modify. A static representation of the Turing machine does not meet this requirement.
So a Turing machine that is not running is not conscious.
Is there another perspective to consider?