One model I’ve played around with is distinguishing two different sorts of beliefs, which for historical reasons I call “credences” and “propositional assertions”. My model doesn’t entirely hold water, I think, but it might be a useful starting point for thinking about this topic.
Roughly speaking, I define a “credence” to be a Bayesian belief in the naive sense. It updates according to what you perceive, and “from the inside” it just feels like the way the world is. I consider basic senses, as well as aliefs, to fall under the “credence” label.
More specifically, in this model, your credence when looking at the picture is that there is a checkerboard with consistently colored squares, and a cylinder standing on the checkerboard and casting a shadow on it, which obviously doesn’t change the shade of the squares but does make them look darker.
In contrast, in this model, I assert that abstract conscious high-level verbal beliefs aren’t proper beliefs (in the Bayesian sense) at all; rather, they’re “propositional assertions”. They’re more like a sort of verbal game or something. People learn different ways of communicating verbally with each other, and these ways to a degree constrain their learned “rules of the game” to act like proper beliefs, but in some cases they can end up acting very differently from beliefs (e.g. signalling and such).
When doing theory of mind, we learn to mostly just accept the homunculus fallacy, because socially this leads to useful tools for talking about theory of mind, even if they are not very accurate. You also learn to endorse the notion that you know your credences are wrong and irrational, even though your credences are what you “really” believe; e.g. you learn to endorse the proposition that “B” has the same color as “A”.
This model could probably be said to imply too sharp a separation between your rational mind and the rest of your mind, in a way that is unrealistic. But it might be a useful inversion of the standard account of the situation, which engages in the homunculus fallacy?
More specifically, in this model, your credence when looking at the picture is that there is a checkerboard with consistently colored squares, and a cylinder standing on the checkerboard and casting a shadow on it, which obviously doesn’t change the shade of the squares but does make them look darker.
In your model, do you think there’s some sort of confused query-substitution going on, where we (at some level) confuse “is the color patch darker” with “is the square of the checkerboard darker”?
Because for me, the actual color patch (usually) seems darker, and I perceive myself as being able to distinguish between those two queries.
Do the credences simply lack that distinction or something?
More generally, my correction to your credences/assertions model would be to point out that (in very specific ways) the assertions can end up “smarter”. Specifically, I think assertions are better at making crisp distinctions and better at logical reasoning. This puts assertions in a weird position.
I might not have explained the credence/propositional assertion distinction well enough. Imagine some sort of language model in AI, like GPT-3 or CLIP or whatever. For a language model, credences are its internal neuron activations and weights, while propositional assertions are the sequences of text tokens. The neuron activations and weights seem like they should definitely have a Bayesian interpretation as beliefs, since they are optimized for accurate predictions, but this does not mean one can take the semantic meaning of the text strings at face value; the model isn’t optimized to emit true text strings, but instead optimized to emit text strings that match what humans say (or if it were an RL agent, maybe text strings that make humans do what it wants, or whatever).
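To make the analogy a bit more concrete, here is a minimal sketch of the two sides of the split, assuming the Hugging Face transformers library and GPT-2 as a stand-in for “some sort of language model” (the prompt and everything else here is purely illustrative): the continuous internal activations on one side, and the discrete text the model actually emits on the other.

```python
# Minimal sketch: "credence"-like internal state vs "propositional assertion"-like output.
# Assumes the Hugging Face transformers library and GPT-2 as an illustrative stand-in.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Square A and square B are", return_tensors="pt")

with torch.no_grad():
    # "Credence" side: continuous activations, shaped by a prediction objective.
    outputs = model(**inputs, output_hidden_states=True)
    activations = outputs.hidden_states[-1]  # (batch, seq_len, hidden_size)

    # "Propositional assertion" side: a discrete token sequence the model emits,
    # optimized to match the training distribution of human text, not to be true.
    generated = model.generate(**inputs, max_new_tokens=12, do_sample=False)

print(activations.shape)
print(tokenizer.decode(generated[0]))
```

The point of the sketch is just that nothing in the training objective ties the decoded string to truth; it ties it to matching what humans tend to write.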
My proposal is, what if humans have a similar split going on? This might be obscured a bit in this context, since we’re on LessWrong, which to a large degree has a goal of making propositional assertions act more like proper beliefs.
In your model, do you think there’s some sort of confused query-substitution going on, where we (at some level) confuse “is the color patch darker” with “is the square of the checkerboard darker”?
Yes, assuming I understand you correctly. It seems to me that there are at least three queries at play:
1. Is the square on the checkerboard of a darker color?
2. Is there a shadow that darkens these squares?
3. Is the light emitted from this flat screen of a lower luminosity?
If I understand your question, “is the color patch darker?” maps to query 3?
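One way to see how query 3 differs from the other two: in principle it can be answered purely mechanically, by reading pixel values off the screen instead of consulting your visual model at all. A minimal sketch, assuming a saved copy of the checker-shadow image; the filename and patch coordinates below are hypothetical and would need to be picked for your particular copy of the image.

```python
# Minimal sketch of answering query 3 directly from pixel data.
# The filename and coordinates are hypothetical placeholders.
from PIL import Image

img = Image.open("checker_shadow.png").convert("L")  # "L" = 8-bit grayscale luminosity

a_xy = (120, 150)  # hypothetical point inside square A (outside the shadow)
b_xy = (220, 260)  # hypothetical point inside square B (inside the shadow)

lum_a = img.getpixel(a_xy)
lum_b = img.getpixel(b_xy)

# Query 3 compares these raw values; in the standard illusion image they come out
# (roughly) equal, even though queries 1 and 2 get different answers for A and B.
print(f"luminosity at A: {lum_a}, luminosity at B: {lum_b}")
```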
The reason the illusion works is that for most people, query 3 isn’t part of their model (in the sense of credences). They can deal with the list of symbols as a propositional assertion, but it doesn’t map all the way into their senses. (Unless they have sufficient experience with it? I imagine artists would end up also having credences on it, due to experience with selecting colors. I’ve also heard that learning to see the actual visual shapes of what you’re drawing, rather than the abstracted representation, is an important step in becoming an artist.)
Do the credences simply lack that distinction or something?
The existence of the illusion would seem to imply that most people’s credences lack the distinction (or rather, lack query 3, and thus find it necessary to translate query 3 into query 2 or query 1). However, it’s not fundamental to the notion of credence vs propositional assertion that credences lack this. Rather, the homunculus problem seems to involve some sort of duality, either real or confused. I’m proposing that the duality is real, but in a different way than the homunculus fallacy suggests: credences act like beliefs, while propositional assertions can act in many different ways.
This model doesn’t really make strong claims about the structure of the distinctions credences make, similar to how Bayesianism doesn’t make strong claims about the structure of the prior. That said, there must obviously be some innate element, and there also seems to be some learned element, where credences come to make the distinctions that you have experience with.
We’ve seen objects move in and out of light sources a ton, so we are very experienced in the distinction between “this object has a dark color” vs “there is a shadow on this object”. Meanwhile...
Wait actually, you’ve done some illustrations, right? I’m not sure how experienced you are with art (the illustrations you’ve posted to LessWrong have been sketches without photorealistic shading, if I recall correctly, but you might very well have done other stuff that I’m not aware of), so if you do have experience with shading things, that might disprove some of my thoughts on how this works.
(Though in a way this is kinda peripheral to my idea… there’s lots of ways that credences could work that don’t match this.)
More generally, my correction to your credences/assertions model would be to point out that (in very specific ways) the assertions can end up “smarter”. Specifically, I think assertions are better at making crisp distinctions and better at logical reasoning. This puts assertions in a weird position.
Yes, and propositional assertions seem more “open-ended” and separable from the people thinking of them, while credences are more embedded in the person and their viewpoint. There’s a tradeoff; I’m just proposing seeing the tradeoff more as “credence-oriented individuals use propositional assertions as tools”.