Arguably it could simulate itself seeing red and replace itself with the simulation.
I think the distinction between ‘knowing all about’ and ‘seeing’ red is captured in my box analogy. The brain state is a box. There is another box inside it; call this one ‘understanding’. We call something inside the first box ‘experienced’. So the paradox here is that two distinct states, [experiencing (red)] and [experiencing ( [understanding (red)] )], are both brought under the same header, [knowing (red)], and this is really confusing.
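As a toy sketch of the analogy (my own illustration; the `Box` type and `knows_red` predicate are hypothetical, not anything from the discussion), the two states can be modeled as nested labeled boxes that a coarse "knowing" predicate fails to tell apart:

```python
# Toy model of the box analogy: mental states as nested, labeled boxes.
from dataclasses import dataclass
from typing import Union

@dataclass
class Box:
    label: str                       # e.g. "experiencing", "understanding"
    content: Union["Box", str]       # a nested box, or raw content like "red"

# State 1: directly experiencing red.
seeing = Box("experiencing", "red")

# State 2: experiencing an understanding of red ("knowing all about" it).
knowing_about = Box("experiencing", Box("understanding", "red"))

def knows_red(state: Box) -> bool:
    """Coarse predicate: does 'red' appear anywhere inside the state?
    It collapses both distinct states under one header, [knowing (red)]."""
    inner = state.content
    while isinstance(inner, Box):
        inner = inner.content
    return inner == "red"

# Both distinct states satisfy the same coarse predicate...
assert knows_red(seeing) and knows_red(knowing_about)
# ...even though their structures differ.
assert seeing != knowing_about
```

The confusion in the paradox then shows up as a predicate that is true of two structurally different states.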
Interesting thought experiment. Do we know an AI would enter a different mental state though?
I am finding it difficult to imagine the difference between software “knowing all about” red and “seeing” red.
We could program it that way.