Can you give examples of beliefs that aren’t about anticipation?
Beliefs about things that are outside our future light cone possibly qualify, to the extent that the beliefs don’t relate to things that leave historical footprints. If you’ll pardon an extreme and trite case, I would have a belief that the guy who flew the relativistic rocket out of my light cone did not cease to exist as he passed out of that cone, and also did not get eaten by a giant space monster ten minutes later. My anticipations are not constrained by beliefs about either of those possibilities.
In both cases my inability to constrain my anticipated experiences speaks to my limited ability to experience, not a limitation of the universe. The same principles of ‘belief’ apply even though the subject of the belief has incidentally fallen outside the scope of what I am able to influence or verify even in principle.
Beliefs that aren’t easily testable also tend to be the kind of beliefs that have a lot of political associations, and thus tend not to act like beliefs as such so much as policies. Also, even falsified beliefs tend to be summarily replaced with new untested/not-intended-to-be-tested beliefs, e.g. “communism is good” with “correctly implemented communism is good”, or “whites and blacks have equal average IQ” with “whites and blacks would have equal average IQ if they’d had the same cultural privileges/disadvantages”. (Apologies for the necessary political examples. Please don’t use this as an opportunity to talk about communism or race.)
Many “beliefs” that aren’t politically relevant (a category which excludes most scientific “knowledge” and much knowledge of yourself, the people you know, what you want to do with your life, et cetera) are better characterized as knowledge, not beliefs as such. The answers to questions like “do I have one hand, two hands, or three hands?” or “how do I get back to my house from my workplace?” aren’t generally beliefs so much as knowledge, and in my opinion “knowledge” is not only epistemologically but also cognitively and neurologically the more accurate description, though I don’t know enough about memory encoding to really back up that claim (the difference is introspectively apparent, though). Either way, I still think that, given our knowledge of the non-fundamental-ness of Bayes, we shouldn’t try too hard to stretch Bayes-ness to fit decision problems or cognitive algorithms that Bayes wasn’t meant to describe or solve, even if it’s technically possible to do so.
I believe the common term for that mistake (replacing a falsified belief with a new, not-intended-to-be-tested one) is “no true Scotsman”.
What do we lose by saying that doesn’t count as a belief? Some consistency when we describe how our minds manipulate anticipations (because we don’t separate out ones we can measure and ones we can’t, but reality does separate those, and our terminology fits reality)? Something else?
So if someone you care about is leaving your future light cone, you wouldn’t care if he gets horribly tortured as soon as he’s outside of it?
I’m not clear on the relevance of caring to beliefs. I would prefer that those I care about not be tortured, but once they’re out of my future light cone whatever happens to them is a sunk cost- I don’t see what I (or they) get from my preferring or believing things about them.
Yes, but you can affect what happens to them before they leave.
Before they leave, their torture would be in my future light cone, right?
Oops, I just realized that in my hypothetical scenario, by “someone being tortured outside your light cone” I meant someone being tortured somewhere your two future light cones don’t intersect.
Indeed; being outside of my future light cone just means whatever I do has no impact on them. But now not only can I not impact them, but they’re also dead to me (as neither they nor any information they emit will ever reach me). I still don’t see what impact caring about them has.
Ok, my scenario involves your actions having an effect on them before your two light cones become disjoint.
Right, but for my actions to have an effect on them, they have to be in my future light cone at the time of action. It sounds like you’re interested in events that are in my future light cone but will not be in any of the past light cones centered at points along my future worldline- like, for example, things that I can set in motion now which will not come to fruition until after I’m dead, or the person I care about pondering whether or not to jump into a black hole. Those things are worth caring about so long as they’re in my future light cone, and it’s meaningful to have beliefs about them to the degree that they could be in my past light cone in the future.
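(For concreteness, here is a minimal sketch of the light-cone relations this exchange keeps appealing to. Events are treated as (t, x, y, z) points in flat spacetime with units where c = 1; the function names are purely illustrative and not from any particular library.)

```python
# Minimal sketch of the light-cone relations under discussion.
# Events are (t, x, y, z) tuples in flat spacetime, units where c = 1.
# Function names are illustrative only.

def interval_squared(a, b):
    """Squared spacetime interval between events a and b (>= 0 means causally connectable)."""
    dt = b[0] - a[0]
    dx = b[1] - a[1]
    dy = b[2] - a[2]
    dz = b[3] - a[3]
    return dt * dt - (dx * dx + dy * dy + dz * dz)

def in_future_light_cone(a, b):
    """True if an action at event a can still influence event b."""
    return b[0] > a[0] and interval_squared(a, b) >= 0

def in_past_light_cone(a, b):
    """True if a signal emitted at event b can have reached event a."""
    return b[0] < a[0] and interval_squared(a, b) >= 0

here_now = (0.0, 0.0, 0.0, 0.0)
one_year_half_ly_away = (1.0, 0.5, 0.0, 0.0)
one_year_two_ly_away = (1.0, 2.0, 0.0, 0.0)

print(in_future_light_cone(here_now, one_year_half_ly_away))  # True: reachable
print(in_future_light_cone(here_now, one_year_two_ly_away))   # False: unreachable
```

(The first predicate governs which events an action can still affect; the second, evaluated at points along one's future worldline, governs which events can ever show up in one's experience. The exchange above turns on cases where the two come apart.)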
The best illustration I’ve seen thus far is this one.
(Side note: I desire few things more than a community where people automatically and regularly engage in analyses like the one linked to. Such a community would actually be significantly less wrong than any community thus far seen on Earth. When LessWrong tries to engage in causal analyses of why others believe what they believe it’s usually really bad: proffered explanations are variations on “memetic selection pressures”, “confirmation bias”, or other fully general “explanations”/rationalizations. I think this in itself is a damning critique of LessWrong, and I think some of the attitude that promotes such ignorance of the causes of others’ beliefs is apparent in posts like “Our Phyg Is Not Exclusive Enough”.)
I agree that that post is the sort of thing that I want more of on LW.
It seems to me like Steve_Rayhawk’s comment is all about anticipation- I hold position X because I anticipate it will have Y impact on the future. But I think I see the disconnect you’re talking about- the position one takes on global warming is based on anticipations one has about politics, not the climate, but it’s necessary (and/or reduces cognitive dissonance) to state the political position in terms of anticipations one has about the climate.
I don’t think publicly stated beliefs have to be about anticipation- but I do think that private beliefs have to be (should be?) about anticipation. I also think I’m much more sympathetic to the view that rationalizations can use the “beliefs are anticipation” argument as a weapon without finding the true anticipations in question (like Steve_Rayhawk did), but I don’t think that implies that “beliefs are anticipation” is naive or incorrect. Separating out positions, identities, and beliefs seems more helpful than overloading the word “beliefs”.
You seem to be modeling the AGW disputant’s decision policy as if he is internally representing, in a way that would be introspectively clear to him, his belief about AGW and his public stance about AGW as explicitly distinguished nodes, as opposed to having “actual belief about AGW” as a latent node that isn’t introspectively accessible. That’s surely the case sometimes, but I don’t think that’s usually the case. Given the non-distinguishability of beliefs and preferences (and the theoretical non-unique-decomposability (is there a standard economic term for that?) of decision policies) I’m not sure it’s wise to use “belief” to refer to only the (in many cases unidentifiable) “actual anticipation” part of decision policies, either for others or ourselves, especially when we don’t have enough time to be abnormally reflective about the causes and purposes of others’/our “beliefs”.
(Areas where such caution isn’t as necessary are e.g. decision science modeling of simple rational agents, or large-scale economic models. But if you want to model actual people’s policies in complex situations then the naive Bayesian approach (e.g. with influence diagrams) doesn’t work or is way too cumbersome. Does your experience differ from mine? You have a lot more modeling experience than I do. Also I get the impression that Steve disagrees with me at least a little bit, and his opinion is worth a lot more than mine.)
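(To make the latent-node picture concrete, here is a toy sketch of the kind of model being gestured at: “actual belief” as an unobserved node, with only political identity and public stance observable. Every node name and number below is invented purely for illustration.)

```python
# Toy sketch of the latent-node model gestured at above: "actual belief"
# is unobserved; we only see political identity and public stance.
# All node names and probabilities are invented for illustration.

# P(actual belief | identity)
P_BELIEF = {
    ("accepts_AGW", "tribe_A"): 0.8, ("doubts_AGW", "tribe_A"): 0.2,
    ("accepts_AGW", "tribe_B"): 0.4, ("doubts_AGW", "tribe_B"): 0.6,
}

# P(publicly affirms AGW | actual belief, identity)
P_AFFIRM = {
    ("accepts_AGW", "tribe_A"): 0.95, ("accepts_AGW", "tribe_B"): 0.50,
    ("doubts_AGW", "tribe_A"): 0.60,  ("doubts_AGW", "tribe_B"): 0.05,
}

def posterior_belief(stance, identity):
    """Posterior over the latent belief node given the observable nodes."""
    joint = {}
    for belief in ("accepts_AGW", "doubts_AGW"):
        p_affirm = P_AFFIRM[(belief, identity)]
        p_stance = p_affirm if stance == "affirms" else 1.0 - p_affirm
        joint[belief] = P_BELIEF[(belief, identity)] * p_stance
    total = sum(joint.values())
    return {b: p / total for b, p in joint.items()}

# The public stance is informative about the latent belief only to the
# extent that identity doesn't already explain it.
print(posterior_belief("affirms", "tribe_A"))  # small update: identity explains most of it
print(posterior_belief("affirms", "tribe_B"))  # unusual stance for the tribe, stronger update
```

(This is, of course, the version where the nodes are explicitly distinguished; the point above is that for a real person the latent node may not be introspectively accessible, or even uniquely identifiable from the decision policy.)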
Another more theoretical reason I encourage caution about the “belief as anticipation” idea is that I don’t think it correctly characterizes the nature of belief in light of recent ideas in decision theory. To me, beliefs seem to be about coordination, where your choice of belief (e.g. expecting a squared rather than a cubed modulus Born rule) is determined by the innate preference (drilled into you by ecological contingencies and natural selection) to coordinate your actions with the actions and decision policies of the agents around you, and where your utility function is about self-coordination (e.g. for purposes of dynamic consistency). The ‘pure’ “anticipation” aspect of beliefs only seems relevant in certain cases, e.g. when you don’t have “anthropic” uncertainty (e.g. uncertainty about the extent to which your contexts are ambiently determined by your decision policy). Unfortunately people like me always have a substantial amount of “anthropic” uncertainty, and it’s mostly only in counterfactual/toy problems where I can use the naive Bayesian approach to epistemology.
(Note that taking the general decision theoretic perspective doesn’t lead to wacky quantum-suicide-like implications, otherwise I would be a lot more skeptical about the prudence of partially ditching the Bayesian boat.)
I’m describing it that way but I don’t think the introspection is necessary- it’s just easier to talk about as if he had full access to his mind. (Private beliefs don’t have to be beliefs that the mind’s narrator has access to, and oftentimes are kept out of its reach for security purposes!)
I don’t think I’ve seen any Bayesian modeling of that sort of thing, but I haven’t gone looking for it.
Bayes nets in general are difficult for people, rather than computers, to manipulate, and so it’s hard to decide what makes them too cumbersome. (Bayes nets in industrial use, like for fault diagnostics, tend to have hundreds if not thousands of nodes, but you wouldn’t have a person traverse them unaided.)
If you wanted to code a narrow AI that determined someone’s mood by, say, webcam footage of them, I think putting your perception data into a Bayes net would be a common approach.
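(A minimal sketch of that kind of approach, assuming some upstream vision step has already turned a webcam frame into boolean facial features: a naive Bayes classifier, about the simplest possible Bayes-net structure, with a single mood node and conditionally independent feature children. The feature names and all the probabilities are invented; a real system would learn them from labeled data.)

```python
# Naive-Bayes sketch of the webcam-mood idea above. Assumes an upstream
# vision step has already turned a frame into boolean features; the
# feature names and every probability here are invented.

MOODS = ("happy", "neutral", "sad")
PRIOR = {"happy": 0.3, "neutral": 0.5, "sad": 0.2}

# P(feature is present | mood)
LIKELIHOOD = {
    "smile":         {"happy": 0.8, "neutral": 0.3, "sad": 0.05},
    "furrowed_brow": {"happy": 0.1, "neutral": 0.2, "sad": 0.6},
    "gaze_down":     {"happy": 0.1, "neutral": 0.3, "sad": 0.7},
}

def mood_posterior(observed):
    """Posterior over moods given a dict of boolean feature observations."""
    scores = {}
    for mood in MOODS:
        p = PRIOR[mood]
        for feature, present in observed.items():
            p_present = LIKELIHOOD[feature][mood]
            p *= p_present if present else 1.0 - p_present
        scores[mood] = p
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}

frame_features = {"smile": False, "furrowed_brow": True, "gaze_down": True}
print(mood_posterior(frame_features))  # "sad" dominates for these inputs
```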
Political positions / psychology seem tough. I could see someone do belief-mapping and correlation in a useful way, but I don’t see analysis on the level of Steve_Rayhawk’s post coming out of a computer-run Bayes net anytime soon, and I don’t think drawing out a Bayes net would help significantly with that sort of analysis. Possible but unlikely- we’ve got pretty sophisticated dedicated hardware for very similar things.
Hmm. I’m going to need to sleep on this, but this sort of coordination still smells to me like anticipation.
(A general comment: this conversation has moved me towards thinking that it’s useful for the LW norm to be tabooing “belief” and using “anticipation” instead when appropriate, rather than trying to equate the two terms. I don’t know if you’re advocating for tabooing “belief”, though.)
(Complement to my other reply: You might not have seen this comment, where I suggest “knowledge” as a better descriptor than “belief” in most mundane settings. (Also I suspect that people’s uses of the words “think” versus “believe” are correlated with introspectively distinct kinds of uncertainty.))
Beliefs about primordial cows, etc. Most people’s beliefs. He’s talking descriptively, not normatively.
Don’t my beliefs about primordial cows constrain my anticipation of the fossil record and development of contemporary species?
I think “most people’s beliefs” fit the anticipation framework- so long as you express them in a compartmentalized fashion, and my understanding of the point of the ‘belief=anticipation’ approach is that it helps resist compartmentalization, which is generally positive.