The LEDs are physical objects, so your list of firings could be wrong about the physical fact of whether they actually fired: you might have been hallucinating when you made the list. The same goes for the neurons: either your knowledge of them is indirect, or no one actually knows whether a given neuron is on or off.
Well, except you could say that the neurons or LEDs themselves know about themselves. But first, that just renames "knowledge and reality" to "knowledge and direct knowledge"; and second, it still leaves almost all seemings uncertain (except things like "the left half of a rock seems like the left half of a rock to the left half of a rock"). Even if your sensations can be certain about themselves, you can't be certain that you are having them.
Or you could have an explicitly Cartesian model where some part of the chain "photons → eye → visual cortex → neocortex → expressed words" is arbitrarily defined as always-true knowledge. For example, if the visual cortex says "there is an edge at (123, 123) of visual space", you treat that as true, or simply as an input. But now you have the problem of deciding "true about what?". It can't be certain knowledge about the eye, because the visual cortex could be wrong about the eye; and it can't be certain knowledge about the visual cortex for any receiver of that message, because the message could be spoofed in transit. I guess implementing a Cartesian agent would be easier, and maybe some part of any reasonable agent is even required to be Cartesian, but I don't see how certainty about inputs can be justified.
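To make the structure concrete, here is a toy sketch (entirely my own illustration; the stage names and the trust flag are made up, not anyone's actual model) of a chain where one stage is arbitrarily designated as the Cartesian boundary and everything downstream accepts its reports as axiomatic inputs:

```python
# Toy sketch: a perception chain where the visual cortex's report is treated
# as an unquestioned input by the stage after it. Trusting the report says
# nothing about whether it is true of the eye or of the photons.
import random

def eye(photons):
    # Fallible transduction: sometimes drops a signal.
    return [p for p in photons if random.random() > 0.05]

def visual_cortex(signals):
    # Fallible feature detection: emits a claim about the visual field.
    return {"edge_at": (123, 123), "n_signals": len(signals)}

def neocortex(report, trust_report=True):
    # The "Cartesian" move: accept the report as given.
    if trust_report:
        return f"I see an edge at {report['edge_at']}"
    # A non-Cartesian agent would instead model the report's error rate.
    return "I received a report that might be wrong"

photons = list(range(20))
print(neocortex(visual_cortex(eye(photons))))
```

The sketch only shows where the trust is placed; it doesn't answer the "true about what?" question, since the trusted report is still two fallible steps away from the photons.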
There are some forms of synesthesia where certain letters get colored as certain colors. If a "u" is supposed to be red, producing that data construct to hand to the next layer doesn't need to conform to the outside world. "U"s are not inherently red, but seeing letters in colors can make a brain perform better or more easily on certain tasks.
Phenomenology is concerned with what kind of entities these passed-around representations are. In that frame it makes sense to say that in synesthesia a letter concept invokes the qualia of a color.
I was forming a rather complex view where each subsystem has direct knowledge about its own interfaces but only indirect knowledge about what goes on in other systems. This makes a given representation direct, infallible knowledge to some system and fallible knowledge to other systems (seeing a red dot doesn't mean one has seen a red photon, just that something like 10 or so photons were needed for the signal to carry forward from the eye).
Even if most of the interesting stuff is indirect knowledge, the top level always needs its interface to the nearby bit. For the system to do the subcalculation/experience that it is doing, it needs to be based on solid signals. The part that assembles words from letters is at the mercy of the letter-seeing part and its error rate. That is, the word part can function one way if "u" is seen and "f" is not, and another way if "u" is unseen and "f" is not, but if it tried to produce words without hints or help from the letter-seeing part, it could not be sensitive to the wider universe.
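A toy sketch of that layering (again just my own illustration; the layer names and the 10% error rate are made up): the word part's direct knowledge is whatever the letter part hands it, even though that hand-off is fallible relative to the actual page.

```python
# Toy sketch: a word-assembling subsystem whose *direct* knowledge is the
# letter-seeing subsystem's output, regardless of what the page really says.
import random

def letter_layer(actual_text, error_rate=0.1):
    # Fallible interface to the page: occasionally misreads a letter.
    return "".join(
        random.choice("abcdefghijklmnopqrstuvwxyz")
        if c.isalpha() and random.random() < error_rate else c
        for c in actual_text
    )

def word_layer(perceived_letters):
    # To this layer the perceived letters simply *are* the input; it cannot
    # tell a misread "u" from a real one. Its knowledge of the page is only
    # as good as the layer below, and without that layer it has no input at all.
    return perceived_letters.split()

page = "fun run"
seen = letter_layer(page)
print(seen, word_layer(seen))
```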