More specifically, in this model, your credence when looking at the picture is that there is a checkerboard with consistently colored squares, and a cylinder standing on the checkerboard and casting a shadow on it, which obviously doesn’t change the shade of the squares but does make them look darker.
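(As a toy illustration of that scene model, with made-up numbers rather than anything measured from the actual image: what reaches the eye is roughly the square’s surface shade times the illumination falling on it, so a light square in shadow and a dark square in full light can come out identical on screen while the credence-level model still keeps them distinct.)

```python
# A toy sketch (made-up numbers, not measured from the illusion) of the scene
# model above: observed brightness ~ surface shade * illumination, so the
# shadow makes a square *look* darker without changing its shade.

LIGHT_SQUARE_SHADE = 0.8   # assumed reflectance of a "light" square
DARK_SQUARE_SHADE = 0.4    # assumed reflectance of a "dark" square
FULL_LIGHT = 1.0           # illumination outside the cylinder's shadow
IN_SHADOW = 0.5            # assumed dimming under the cylinder's shadow

light_square_in_shadow = LIGHT_SQUARE_SHADE * IN_SHADOW   # 0.4
dark_square_in_light = DARK_SQUARE_SHADE * FULL_LIGHT     # 0.4

# Identical values on the screen, yet the scene model keeps them distinct:
# one is "a light square under a shadow", the other "a dark square in the open".
print(light_square_in_shadow, dark_square_in_light)
```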
In your model, do you think there’s some sort of confused query-substitution going on, where we (at some level) confuse “is the color patch darker” with “is the square of the checkerboard darker”?
Because for me, the actual color patch (usually) seems darker, and I perceive myself as being able to distinguish that query.
Do the credences simply lack that distinction or something?
More generally, my correction to your credences/assertions model would be to point out that (in very specific ways) the assertions can end up “smarter”. Specifically, I think assertions are better at making crisp distinctions and better at logical reasoning. This puts assertions in a weird position.
I might not have explained the credence/propositional assertion distinction well enough. Imagine some sort of language model in AI, like GPT-3 or CLIP or whatever. For a language model, credences are its internal neuron activations and weights, while propositional assertions are the sequences of text tokens it emits. The neuron activations and weights seem like they should definitely have a Bayesian interpretation as beliefs, since they are optimized for accurate predictions. But this does not mean one can take the semantic meaning of the text strings at face value: the model isn’t optimized to emit true text strings, but to emit text strings that match what humans say (or, if it were an RL agent, maybe text strings that make humans do what it wants, or whatever).
My proposal is, what if humans have a similar split going on? This might be obscured a bit in this context, since we’re on LessWrong, which to a large degree has a goal of making propositional assertions act more like proper beliefs.
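(A minimal sketch of the analogy, using a toy model rather than GPT-3 or CLIP specifically: the weights and activations get shaped by a prediction loss, and that loss only ever compares the emitted tokens against what a human actually wrote, with no term anywhere for whether the emitted string is true.)

```python
# Toy causal language model (not any particular production system), to show
# where the "credences" vs "propositional assertions" split lives in training.
import torch
import torch.nn as nn

VOCAB, DIM = 100, 32  # arbitrary toy sizes

class ToyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):
        hidden, _ = self.rnn(self.embed(tokens))  # "credences": activations shaped by the weights
        return self.head(hidden)                  # logits over emitted tokens ("assertions")

model = ToyLM()
human_text = torch.randint(0, VOCAB, (1, 16))  # stand-in for human-written text

# The loss only measures "does the emitted distribution match what the human
# wrote next?" -- nothing in it checks whether the text is *true*.
logits = model(human_text[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), human_text[:, 1:].reshape(-1)
)
loss.backward()
```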
In your model, do you think there’s some sort of confused query-substitution going on, where we (at some level) confuse “is the color patch darker” with “is the square of the checkerboard darker”?
Yes, assuming I understand you correctly. It seems to me that there are at least three queries at play:
1. Is the square on the checkerboard of a darker color?
2. Is there a shadow that darkens these squares?
3. Is the light emitted from this flat screen of a lower luminosity?
If I understand your question, “is the color patch darker?” maps to query 3?
The reason the illusion works is that for most people, query 3 isn’t part of their model (in the sense of credences). They can deal with the list of symbols as a propositional assertion, but it doesn’t map all the way into their senses. (Unless they have sufficient experience with it? I imagine artists would end up also having credences on it, due to experience with selecting colors. I’ve also heard that learning to see the actual visual shapes of what you’re drawing, rather than the abstracted representation, is an important step in becoming an artist.)
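(To make query 3 concrete: it’s the one you could answer mechanically by reading pixel values off the screen, with no scene model at all. A rough sketch, assuming a screenshot saved as “checkerboard.png” and hypothetical pixel coordinates for the two squares:)

```python
# Rough sketch of query 3: compare the on-screen luminance of two patches directly.
# The filename and coordinates are placeholders, not taken from the real image.
from PIL import Image

def relative_luminance(rgb):
    # crude sRGB-ish weighting; enough to compare two patches on the same screen
    r, g, b = (c / 255 for c in rgb[:3])
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

img = Image.open("checkerboard.png").convert("RGB")
square_in_shadow = img.getpixel((280, 240))  # hypothetical coordinates
square_in_light = img.getpixel((180, 120))   # hypothetical coordinates

# In the standard checker-shadow image these come out (nearly) identical,
# even though queries 1 and 2 get different answers for the two squares.
print(relative_luminance(square_in_shadow), relative_luminance(square_in_light))
```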
Do the credences simply lack that distinction or something?
The existence of the illusion would seem to imply that most people’s credences lack the distinction (or rather, lack query 3, and thus find it necessary to translate query 3 into query 2 or query 1). However, it’s not fundamental to the notion of credence vs propositional assertion that credences lack this. Rather, the homunculus problem seems to involve some sort of duality, either real or confused. I’m proposing that the duality is real, but in a different way than the homunculus fallacy has it: credences act like beliefs, while propositional assertions can act in many ways.
This model doesn’t really make strong claims about the structure of the distinctions credences make, similar to how Bayesianism doesn’t make strong claims about the structure of the prior. That said, there must obviously be some innate element, and there also seems to be some learned element, where credences come to make the distinctions you have experience with.
We’ve seen objects move in and out of light sources a ton, so we are very experienced in the distinction between “this object has a dark color” vs “there is a shadow on this object”. Meanwhile...
Wait actually, you’ve done some illustrations, right? I’m not sure how experienced you are with art (the illustrations you’ve posted to LessWrong have been sketches without photorealistic shading, if I recall correctly, but you might very well have done other stuff that I’m not aware of), so this might disprove some of my thoughts on how this works, if you have experience with shading things.
(Though in a way this is kinda peripheral to my idea… there’s lots of ways that credences could work that don’t match this.)
More generally, my correction to your credences/assertions model would be to point out that (in very specific ways) the assertions can end up “smarter”. Specifically, I think assertions are better at making crisp distinctions and better at logical reasoning. This puts assertions in a weird position.
Yes, and propositional assertions seem more “open-ended” and separable from the people thinking of them, while credences are more embedded in the person and their viewpoint. There’s a tradeoff; I’m just proposing seeing the tradeoff more as “credence-oriented individuals use propositional assertions as tools”.