See my original comment. It’s relatively easy (well, at least it is if you accept that we could build conscious AIs in the first place) to construct an explanation of why an information-processing system would behave as if it had qualia and why it would even represent qualia internally. But that only explains why it behaves as if it had qualia, not why it actually has them.
I did read that before commenting, but I misinterpreted it, and I still find myself unable to understand it. The way I read it, it seems to equivocate between knowing something in the sense of representing it in your physical brain and knowing something in the sense of representing it in the ‘shadow brain’. You know which one is intended where, but I can’t figure it out.
Never mind.
Can you describe the qualia associated with going from epiphenomenalism to functionalism/physicalism/wherever you went?
I’m not entirely sure what you’re asking, but nothing too radical happened. I just thought about it and realized that my model was indeed incoherent about whether it presumed the existence of some causal arrows. My philosophy of mind was already functionalist, so I just dropped the epiphenomenalist component from it.
A bigger impact was that I’ll need to rethink some parts of my model of personal identity, but I haven’t gotten around to that yet.