I’m saying that we should expect experience to feel as if made of fundamental, ineffable parts, even though we know that it is not.
I don’t see why. Saying that experience is really complex neural activity isn’t enough to explain that, because thought is really complex neural activity as well, and we can communicate and unpack concepts.
So, qualia aren’t the problem for a Turing machine that they appear to be.
Can you write the code for SeeRed()? Or are you saying that TMs would have ineffable concepts?
Qualia don’t get special attention just because they feel different. They have a perfectly natural explanation.
You’ve inverted the problem: you have created the expectation that nothing mental is effable.
No, I’m saying that no basic mental part will feel effable. Using our cognition we can build up complex notions of atoms and guitars in our minds, and these will explain why our mental aspects feel fundamental, but those aspects will still feel fundamental.
I’m saying that there are (something like) certain constructs in the brain that are used whenever the most simple conscious thought or feeling is expressed. They’re even used when we don’t choose to express something, like when we look at something: we immediately see its components (surfaces, legs, handles), and the ones we can’t break down (lines, colours) feel like the most basic parts of those representations in our minds.
Perhaps the construct that we identify as red is a set of neurons XYZ firing. If so, whenever we notice (that is, other sets of neurons observe) that XYZ go off, we just take it to be ‘red’. It really appears to be red, and none of the other workings of the neurons can break it down any further. It feels ineffable because we are not privy to everything that’s going on. We can only use a very restricted portion of the brain to examine other chunks and give them different labels.
However, we can use other neuronal patterns to refer to and talk about atoms. Large groups of complex neural firings can observe and reflect upon experimental results showing that the brain is made of atoms.
Now, even though we can build up a model of atoms, and prove that the basic features of conscious experience (redness, lines, the hearing of a middle C) are made of atoms, the fact is, we’re still using complex neuronal patterns to think about these. The atom may be fundamental, but it takes a lot of complexity for me to think about the atom. Consciousness really is reducible to atoms, but when I inspect consciousness, it still feels like a big complex set of neurons that my conscious brain can’t understand. It still feels fundamental.
Experientially, redness doesn’t feel like atoms because our conscious minds cannot reduce it in experience, but they can prove that it is reducible. People make the jump that, because complex patterns in one part of the brain (one conscious part) cannot reduce another (conscious) part to mere atoms, it must be a fundamental part of reality. However, this does not follow logically—you can’t assume your conscious experience can comprehend everything you think and feel at the most fundamental level, purely by reflection.
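The dual-access point can be put in toy form too (again, names invented purely for illustration): the same internal state can be examined by two routes, one of which only ever yields a label, while the other exposes the mechanism. The invalid inference is to conclude from the first route’s opacity that the state is fundamental.

```python
# Toy sketch (invented): two access routes to the same internal state.
# Introspection only gets an opaque label; measurement gets the mechanism.

hidden_state = ("neuron group", "XYZ", "firing")  # the implementation of 'red'

def introspect(state):
    # The conscious reporter only ever receives a label, never the tuple.
    return "red"

def measure(state):
    # Third-person experiment can expose the underlying structure.
    return state

label = introspect(hidden_state)
mechanism = measure(hidden_state)

# The invalid inference: "introspection can't decompose 'red', therefore
# 'red' is fundamental."  Opacity of one access route says nothing about
# the state itself -- the other route reaches the structure just fine.
assert label == "red"
assert mechanism == ("neuron group", "XYZ", "firing")
```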
I feel I’ve gone on too long in trying to give an example of how something could feel basic but not be. I’m just saying we’re not privy to everything that’s going on, so we can’t make sweeping knowledge claims about it, i.e. that a Turing machine couldn’t experience what we’re experiencing, purely by appeal to reflection. We just aren’t reflectively transparent.
I’m not continuing this discussion; it’s going nowhere new. I will, however, offer Orthonormal’s sequence on qualia as explanatory: http://lesswrong.com/lw/5n9/seeing_red_dissolving_marys_room_and_qualia/
You seem to be hinting, but are not quite saying, that qualia are basic and therefore ineffable, whilst thoughts are non-basic and therefore effable.
Confirming the above would be somewhere new.