Are you seriously saying that “You cannot be sure how the world seems to you” has significant plausibility?
How sure are you that this sentence seems to you the same as it seemed to you 1 ms ago? If you can’t precisely quantify the difference between experiences, you can’t have perfect certainty in your beliefs about experience. And it gets worse when you leave the zone that the brain’s reflective capabilities were optimized for.
Past experiences do not directly seem to me, and indeed I can’t make such cross-temporal comparisons. However, the memory image I have of the past is a seeming. This is often much less than direct current experiencing.
By analogy with belief-in-belief, one can have belief-in-experience, but it can’t be all belief; there needs to be some actual experience in there.
That is, sentence-m1 and sentence-m2 might be mistakenly believed to be two slices of a cross-temporal object, sentence-eternal. But what you actually perceive is two separate objects (in two separate perceptions).
Whatever kinds of things we “read into” our perception (such as sentence-eternal), there is something we read them “with”. Those kinds of things can’t fail to exist. And even with the kinds of things we build, there is a fact of the matter about whether or not they get built: when faced with some black on white, either a hallucination of sentence-eternal takes place or it does not.
Yes, there are experiences, not only beliefs about them. But just as with beliefs about external reality, beliefs about experiences can be imprecise.
It is possible to create a more precise description of how something seems to you, for which your internal representation with its integer count of built things is just an approximation. And you can even define some measure of the difference between experiences, instead of just talking about separate objects.
It is not an extremely bad approximation to say “it seems like two sentences to me”, so it is not as though being sure in the absence of experience is the right way.
The only thing you can be sure of is that something exists, because otherwise nothing could produce any approximations. But if you can’t precisely specify the temporal, spatial, or whatever other characteristics of your experience, there is no sense in which you can be sure how something seems to you.
Even with beliefs about internal events there is the direct evidence and then there is the pattern seen in it. On the neuronal level this means that a neuron is either on or off. Whatever it signifies or tells about is secondary, but the firing event itself is the world here-now rather than “out there”. Now you could have more abstract parts of the brain that do not have direct access to what happens in the subconscious parts. There is the eye, there is the visual cortex, and there is the neocortex. The neocortex might separately build a model for itself of what happens in the visual cortex. This is inherently guesswork and is subject to uncertainty. However, the concrete objects that the visual cortex passes up are “concrete firings”; it would not make sense for the brain to make a model of those, and it need not.
I get that you are gesturing at a model where there is some nebulous truth, and the more sophisticated the ways one can measure it, the more faithful a representation can be given. Yes, if your measuring apparatus has more LED lights in it to go off, it will extract more bits from the thing measured. But if one installs additional lights, the trigger conditions of the old lights just stay the same rather than improving in some way. Sure, you can be uncertain whether a light goes off because a photon was caught or because an earthquake tripped it. But the fact that the light did trip, i.e. the data itself, is not subject to this kind of speculation.
In principle I could just have a list of LED firings without a good model of how such triggering could have come about. I would still have a seeming without knowing how to build anything from it.
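A minimal sketch of that distinction in Python, with all names made up for illustration: the list of firings is just the data the apparatus directly has, while any story about what caused a firing is a separate layer of inference that can be wrong, or absent, without touching the record itself.

```python
from dataclasses import dataclass

@dataclass
class Firing:
    led_id: int       # which LED tripped (hypothetical identifier)
    timestamp: float  # when it tripped

# The "list of LED firings": the raw record, all the apparatus directly has.
raw_record = [Firing(led_id=3, timestamp=0.001),
              Firing(led_id=7, timestamp=0.002)]

def guess_cause(firing: Firing) -> str:
    """A model layered on top of the data; it can be wrong or missing
    without changing raw_record itself."""
    return "photon caught" if firing.led_id < 5 else "maybe an earthquake"

for f in raw_record:
    # The uncertainty lives in the interpretation, not in the firing record.
    print(f, "->", guess_cause(f))
```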
The LEDs are physical objects, so your list of firings could be wrong about the physical fact of actual firing if you had a hallucination when making that list. Same with the neurons: it’s either indirect knowledge about them, or no one actually knows whether some neuron is on or off.
Well, except you can say that neurons or LEDs themselves know about themselves. But first, that is just renaming “knowledge and reality” to “knowledge and direct knowledge”, and second, it still leaves almost all seemings (except “the left half of a rock seems like the left half of a rock to the left half of a rock”) as uncertain: even if your sensations can be certain about themselves, you can’t be certain that you are having them.
Or you could have an explicitly Cartesian model where some part of the chain “photons → eye → visual cortex → neocortex → expressed words” is arbitrarily defined as always-true knowledge. Like if the visual cortex says “there is an edge at (123, 123) of visual space”, you interpret it as true, or as an input. But now you have the problem of determining “true about what?”. It can’t be certain knowledge about the eye, because the visual cortex could be wrong about the eye, and it can’t be about the visual cortex for any receiver of that knowledge, because it could be spoofed in transit. I guess implementing a Cartesian agent would be easier, or maybe some part of any reasonable agent is even required to be Cartesian, but I don’t see how certainty in inputs can be justified.
There are some forms of synesthesia where certain letters get colored as certain colors. If a “u” is supposed to be red, producing that data construct to give to the next layer doesn’t need to conform to the outside world. “U”s are not inherently red, but seeing letters in colors can make a brain perform better or more easily in certain tasks.
Phenomenology is concerned with what kind of entities these representations that are passed around are. There it makes sense to say that in synesthesia a letter concept invokes the qualia of color.
I was forming a rather complex view where each subsystem has direct knowledge about the interfaces it has, but indirect knowledge of what goes on in other systems. This makes it so that a given representation is direct, infallible knowledge to some system and fallible knowledge to other systems (seeing a red dot doesn’t mean one has seen a red photon; it just takes something like 10 or so photons for the signal to carry forward from the eye).
Even if most of the interesting stuff is indirect knowledge, the top level always needs its interface to the nearby bit. For the system to do the subcalculation/experience that it is doing, it needs to be based on solid signals. The part that sees words from letters may be at the mercy of the error rate of the letter-seeing part. That is, the word part can function one way if “u” is seen and “f” is not seen, and another way if “u” is unseen and “f” is not seen, but should it try to produce words without hints or help from the letter-seeing part, it cannot be sensitive to the wider universe.
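A rough sketch of that layered picture, with hypothetical names: the word-level part only ever works from what the letter-level part hands it across their interface; it has no independent access to the photons or the retina behind that interface.

```python
def letter_layer(raw_signal: str) -> list:
    """Stands in for the letter-seeing part; its internals (and its error
    rate) are invisible to the layer above."""
    return [c for c in raw_signal if c.isalpha()]

def word_layer(letters: list) -> str:
    """The word-seeing part. Its 'direct knowledge' is just the letters it
    was handed; anything about the wider world is indirect and fallible."""
    return "".join(letters)

# The word layer behaves one way if "u" was passed up and another if it was
# dropped upstream; it cannot tell which from its own vantage point.
print(word_layer(letter_layer("u?f!")))  # -> "uf"
print(word_layer(letter_layer("?f!")))   # -> "f"
```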
Oh jeez, Signer and Slider are two different user names.