We can give a computer an internal representation of shape, but not of colour as we experience it.
How would it function differently if it did have “an internal representation of color as we experience it”?
It would have conscious qualia.
That’s hard to answer without specifying more about the nature of the AI, but it might say things like “what a beautiful sunset”.
I’m not going to say the goalposts are moving, but I definitely don’t know where they are any more. I was talking about red-eye filters built into cameras. You seemed to be suggesting that they do have “internal representations” of shape, but not of color, even though they recognize both shape and color in the same way. I’m trying to see what the difference is.
Essentially, why can a computer have an internal representation of shape without saying “wow, what a beautiful building” but an internal representation of color would lead it to say “wow, what a beautiful sunset”?
I don’t know why you are talking about filters.
If you think you can write seeRed(), please supply some pseudocode.
What was wrong with this comment?
It doesn’t relate to giving an artificial system an internal representation of colour like ours. If you put the filter on, you don’t go from red to black, you go from #FF0000 to #000000, or something.
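For concreteness, here is a toy sketch of the kind of pixel-level substitution such a filter performs; the function names, the red-dominance threshold, and the replacement value are illustrative assumptions, not any real camera’s algorithm. The point is just that, to the filter, “red” is nothing over and above a triple of channel values.

```python
# Toy red-eye "filter": operates purely on RGB channel values.
# The red-dominance heuristic and the replacement colour are made up
# for illustration; real cameras use more involved detection.

def looks_like_red_eye(pixel):
    """Return True if a pixel is strongly red-dominant."""
    r, g, b = pixel
    return r > 150 and r > 2 * g and r > 2 * b

def apply_red_eye_filter(pixels):
    """Replace red-dominant pixels with a dark neutral value.

    To this code, "red" just is something like (255, 0, 0) and
    "black" just is (0, 0, 0): #FF0000 goes to #000000.
    """
    return [(20, 20, 20) if looks_like_red_eye(p) else p for p in pixels]

if __name__ == "__main__":
    sample = [(255, 0, 0), (200, 40, 30), (120, 110, 100)]
    print(apply_red_eye_filter(sample))
    # -> [(20, 20, 20), (20, 20, 20), (120, 110, 100)]
```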
Okay, so… we can’t make computers that go from red to black, and we can’t ourselves understand what it’s like to go from #FF0000 to #000000, and this means what?
To me it means the things we use to do processing are very different. Say, a whole brain emulation would have our experience of color, and if we get really really good at cognitive surgery, we might be able to extract the minimum necessary bits to contain that experience of color, and bolt it onto a red-eye filter. Why bother, though? What’s the relevant difference?
I don’t see how a wodge of bits, in isolation from context, could be said to “contain” any processing, let alone anything depending on actual physics. It’s hard to see how it could even contain any definite meaning, absent context. What does 100110001011101 mean?
Sorry, “minimum necessary (pieces of brain)”, I meant to say. Like, probably not motor control, or language, or maybe memory.
The point of discussing the engineering of colour qualia is that it relates to the level of understanding of how consciousness works. Emulations bypass the need to understand something in order to duplicate it, and so are not relevant to the initial claim that the implementation of (colour) qualia is not understood within current science.