Yes, there are experiences, not only beliefs about them. But, as with beliefs about external reality, beliefs about experiences can be imprecise.
It is possible to create a more precise description of how something seems to you, one for which your internal representation, with its integer count of built things, is just an approximation. And you can even define some measure of the difference between experiences, instead of just talking about separate objects (see the toy sketch below for one possible version).
It is not an extremely bad approximation to say “it seems like two sentences to me”, so it is not as though being sure of the absence of experience is the right way either.
The only thing you can be sure of is that something exists, because otherwise nothing could produce any approximations. But if you can’t precisely specify the temporal, spatial, or whatever other characteristics of your experience, there is no sense in which you can be sure of what something seems like to you.
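To make the point about a “measure of the difference between experiences” concrete, here is a toy sketch in Python. Everything in it is invented for illustration: the feature names, the numbers, and the choice of Euclidean distance are assumptions, not a claim about how experience is actually encoded.

```python
import math

# Toy stand-in for "how something seems": a few invented, hypothetical
# features of a reported experience (all numbers are illustrative).
experience_a = {"sentence_count": 2.0, "vividness": 0.8, "duration_s": 1.5}
experience_b = {"sentence_count": 3.0, "vividness": 0.7, "duration_s": 1.4}

def experience_distance(a, b):
    # One possible "measure of difference": Euclidean distance over
    # shared features. The integer report "it seems like two sentences"
    # is then just a coarse projection of a richer description.
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

print(experience_distance(experience_a, experience_b))  # ~1.01
```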
Even with beliefs about internal events, there is the direct evidence and then there is the pattern seen in it. On the neuronal level this means that a neuron is either on or off. Whatever it signifies or tells about is secondary, but the firing event itself is the world here-now rather than “out there”. Now, you could have more abstract parts of the brain that do not have direct access to what happens in the subconscious parts. There is the eye, there is the visual cortex, and there is the neocortex. The neocortex might separately build a model for itself of what happens in the visual cortex. This is inherently guesswork and is subject to uncertainty. However, the concrete objects that the visual cortex passes up are “concrete firings”; it would not make sense for the brain to build a model of those, and it need not.
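A minimal sketch of that layering, with the names and the noise model invented for illustration: the lower layer’s firings simply are what they are, while the upper layer can only reconstruct them through a noisy read, so its model of them is guesswork.

```python
import random

random.seed(0)

# Lower layer: concrete firing events. Each is simply on or off;
# what a firing signifies is a separate question from the fact of it.
visual_cortex_firings = [random.random() < 0.3 for _ in range(10)]

def neocortex_model(firings, error_rate=0.1):
    # Upper layer: reconstructs the lower layer through a noisy read
    # (the 10% error rate is a made-up figure for illustration).
    return [f if random.random() > error_rate else not f for f in firings]

model = neocortex_model(visual_cortex_firings)
# The firings are facts; the model of them can disagree with them.
print(sum(a != b for a, b in zip(visual_cortex_firings, model)), "mismatches")
```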
I get that you are gesturing at a model where there is some nebulous truth, and the more sophisticated the ways one can measure it, the more faithful a representation can be given. Yes, if your measuring apparatus has more LED lights in it to go off, it will extract more bits from the thing measured. But if one installs additional lights, the trigger conditions of the old lights just stay the same rather than improving in some way. Sure, you can be uncertain whether a light goes off because a photon was caught or because an earthquake tripped it. But the fact that the light did trip, i.e. the data itself, is not subject to this kind of speculation.
In principle I could just have a list of LED firings without a good model of how such triggering could have come about. I would still have a seeming without knowing how to build anything from it.
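A toy version of the LED point (the log entries and the cause probabilities are made up): the list of trips is fixed data, and only the story about what caused each trip carries uncertainty.

```python
# The log of trips is the data; the cause of each trip is inferred.
# Both the entries and the prior probabilities below are invented.
led_log = [("led_3", "12:00:01"), ("led_7", "12:00:02")]

CAUSE_PRIORS = {"photon": 0.95, "earthquake": 0.05}  # hypothetical

for led, timestamp in led_log:
    # That this entry exists is not up for speculation here; only
    # the story about how the trip came about is.
    best_guess = max(CAUSE_PRIORS, key=CAUSE_PRIORS.get)
    print(f"{led} tripped at {timestamp}; most likely cause: {best_guess}")
```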
The LEDs are physical objects, and so your list of firings could be wrong about the physical fact of an actual firing if you were hallucinating when making that list. Same with the neurons: either it’s indirect knowledge about them, or no one actually knows whether some neuron is on or off.
Well, except you can say that neurons or LEDs themselves know about themselves. But first, that is just renaming “knowledge and reality” to “knowledge and direct knowledge”, and second, it still leaves almost all seemings (except “the left half of a rock seems like the left half of a rock to the left half of a rock”) as uncertain: even if your sensations can be certain about themselves, you can’t be certain that you are having them.
Or you could have an explicitly Cartesian model where some part of the chain “photons → eye → visual cortex → neocortex → expressed words” is arbitrarily defined as always-true knowledge. Like, if the visual cortex says “there is an edge at (123, 123) of visual space”, you interpret that as true, or as an input. But now you have the problem of determining “true about what?”. It can’t be certain knowledge about the eye, because the visual cortex could be wrong about the eye, and it can’t be about the visual cortex for any receiver of that knowledge, because it could be spoofed in transit. I guess implementing a Cartesian agent would be easier, or maybe some part of any reasonable agent is even required to be Cartesian, but I don’t see how certainty in inputs can be justified.
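A sketch of how such a Cartesian pipeline might look, with all the stage names and the message format invented: stamping one stage’s output as axiomatically true still leaves “true about what?” open, because the message can be altered before any later stage reads it.

```python
# Sketch of an explicitly Cartesian pipeline: one stage's output is
# simply *defined* as true input for the rest. All names are invented.

def eye(photons):
    return {"stage": "eye", "edge_at": (123, 123)}

def visual_cortex(eye_msg):
    # Declared "always true" from here on, by fiat.
    return {"stage": "visual_cortex", "edge_at": eye_msg["edge_at"], "axiom": True}

def transit(msg, spoof=False):
    # The unresolved problem: the axiom can be altered in transit,
    # so "true about what?" has no stable answer for the receiver.
    if spoof:
        msg = {**msg, "edge_at": (0, 0)}
    return msg

def neocortex(msg):
    assert msg.get("axiom"), "Cartesian input expected"
    return f"there is an edge at {msg['edge_at']}"

print(neocortex(transit(visual_cortex(eye("...")), spoof=True)))
```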
There are some forms of synesthesia where certain letters get colored as certain colors. If a “u” is supposed to be red, producing that data construct to give to the next layer doesn’t need to conform to the outside world. “U”s are not inherently red, but seeing letters in colors can make a brain perform better or more easily on certain tasks.
Phenomenology is concerned with what kind of entities these representations being passed around are. There it makes sense to say that in synesthesia a letter concept invokes the qualia of color.
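As a toy sketch of that hand-off (the letter-to-color mapping is invented; real synesthetic mappings vary from person to person): the letter layer tags its output with a color that has no referent in the outside world, and a downstream task can still benefit from it.

```python
# Hypothetical synesthetic mapping: not a fact about the world,
# just a data construct one layer hands to the next.
SYNESTHETIC_COLORS = {"u": "red", "f": "green"}  # invented mapping

def letter_layer(char):
    # The color tag doesn't conform to the outside world; it can
    # still make downstream tasks easier, e.g. spotting targets.
    return {"letter": char, "color": SYNESTHETIC_COLORS.get(char, "none")}

def search_layer(tokens, target_color="red"):
    # Downstream task: find letters by their (internal) color.
    return [t["letter"] for t in tokens if t["color"] == target_color]

tokens = [letter_layer(c) for c in "fun stuff"]
print(search_layer(tokens))  # ['u', 'u']
```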
I was forming a rather complex view where each subsystem has direct knowledge about the interfaces it has, but indirect knowledge of what goes on in other systems. This makes it so that a given representation is direct, infallible knowledge to some system and fallible knowledge to other systems (seeing a red dot doesn’t mean one has seen a red photon, just the fact that you need a bunch of, like, 10 or so photons for the signal to carry forward from the eye).
Even if most of the interesting stuff is indirect knowledge, the top level always needs its interface to the nearby bit. For the system to do the subcalculation/experience that it is doing, it needs to be based on solid signals. The part that sees words from letters might be at the mercy of the error rate of the letter-seeing part. That is, the word part can function one way if “u” is seen and “f” is not seen, and another way if “u” is unseen and “f” is not seen; but should it try to produce words without hints or help from the letter-seeing part, it cannot be sensitive to the wider universe.
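A sketch of this division of labor, with the interfaces made up and the 10-photon threshold taken from the comment above as an illustrative figure: each part has direct knowledge only of the signal at its own interface, and everything upstream of that interface stays indirect.

```python
PHOTON_THRESHOLD = 10  # illustrative figure from the comment above

def eye(photon_count, shape):
    # Downstream systems never see photon_count, only this signal:
    # "saw a red dot" does not encode how many photons arrived.
    return shape if photon_count >= PHOTON_THRESHOLD else None

def letter_part(signal):
    # Direct knowledge: the signal at its own interface.
    # Indirect knowledge: everything upstream (photons, the eye).
    return signal if signal in ("u", "f") else None

def word_part(letter):
    # Functions one way if "u" is seen, another if it is not; without
    # the letter part's help it cannot be sensitive to the wider universe.
    return "fun" if letter == "u" else "(no word)"

print(word_part(letter_part(eye(12, "u"))))  # fun
print(word_part(letter_part(eye(3, "u"))))   # (no word)
```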