I agree that that post is the sort of thing that I want more of on LW.
It seems to me like Steve_Rayhawk’s comment is all about anticipation: I hold position X because I anticipate it will have Y impact on the future. But I think I see the disconnect you’re talking about: the position one takes on global warming is based on anticipations one has about politics, not the climate, but it’s necessary (and/or reduces cognitive dissonance) to state the political position in terms of anticipations one has about the climate.
I don’t think publicly stated beliefs have to be about anticipation, but I do think that private beliefs have to be (should be?) about anticipation. I’m also now much more sympathetic to the view that rationalizations can use the “beliefs are anticipation” argument as a weapon without uncovering the true anticipations in question (as Steve_Rayhawk managed to do), but I don’t think that implies that “beliefs are anticipation” is naive or incorrect. Separating out positions, identities, and beliefs seems more helpful than overloading the word “beliefs”.
it’s necessary (and/or reduces cognitive dissonance) to state the political position in terms of anticipations one has about the climate.
I don’t think publicly stated beliefs have to be about anticipation
You seem to be modeling the AGW disputant’s decision policy as if he were internally representing, in a way that would be introspectively clear to him, his belief about AGW and his public stance about AGW as explicitly distinguished nodes, as opposed to having “actual belief about AGW” as a latent node that isn’t introspectively accessible. That’s surely the case sometimes, but I don’t think that’s usually the case. Given the non-distinguishability of beliefs and preferences (and the theoretical non-unique-decomposability (is there a standard economic term for that?) of decision policies), I’m not sure it’s wise to use “belief” to refer only to the (in many cases unidentifiable) “actual anticipation” part of decision policies, either for others or for ourselves, especially when we don’t have enough time to be abnormally reflective about the causes and purposes of others’/our “beliefs”.
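(For concreteness, here’s a minimal sketch of the structure I have in mind, written as toy Python rather than anything from the actual dispute; every node name and probability below is invented purely for illustration. The point is just that the “actual belief” node is latent, and an observed public stance only partially constrains it.)

```python
# Toy model: "actual belief about AGW" is a latent node; only the public
# stance and a coarse political identity are observable. All numbers are
# made up purely for illustration.

p_belief = {"agw_real": 0.5, "agw_not_real": 0.5}  # prior over the latent node

# P(public_stance | actual_belief, political_identity)
p_stance = {
    ("agw_real", "party_a"):     {"affirm": 0.95, "deny": 0.05},
    ("agw_real", "party_b"):     {"affirm": 0.40, "deny": 0.60},
    ("agw_not_real", "party_a"): {"affirm": 0.60, "deny": 0.40},
    ("agw_not_real", "party_b"): {"affirm": 0.05, "deny": 0.95},
}

def posterior_belief(stance, identity):
    """P(actual_belief | observed stance and identity), by direct enumeration."""
    joint = {b: p_belief[b] * p_stance[(b, identity)][stance] for b in p_belief}
    z = sum(joint.values())
    return {b: p / z for b, p in joint.items()}

# A denial from a party_b member still leaves the latent belief genuinely
# uncertain: the stance underdetermines the "actual anticipation" part.
print(posterior_belief("deny", "party_b"))  # ~{'agw_real': 0.39, 'agw_not_real': 0.61}
```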
(Areas where such caution isn’t as necessary are, e.g., decision-science modeling of simple rational agents or large-scale economic models. But if you want to model actual people’s policies in complex situations, then the naive Bayesian approach (e.g. with influence diagrams) doesn’t work or is way too cumbersome. Does your experience differ from mine? You have a lot more modeling experience than I do. Also, I get the impression that Steve disagrees with me at least a little bit, and his opinion is worth a lot more than mine.)
Another more theoretical reason I encourage caution about the “belief as anticipation” idea is that I don’t think it correctly characterizes the nature of belief in light of recent ideas in decision theory. To me, beliefs seem to be about coordination, where your choice of belief (e.g. expecting a squared-modulus rather than a cubed-modulus Born rule) is determined by the innate preference (drilled into you by ecological contingencies and natural selection) to coordinate your actions with the actions and decision policies of the agents around you, and where your utility function is about self-coordination (e.g. for purposes of dynamic consistency). The ‘pure’ “anticipation” aspect of beliefs only seems relevant in certain cases, e.g. when you don’t have “anthropic” uncertainty (i.e. uncertainty about the extent to which your contexts are ambiently determined by your decision policy). Unfortunately, people like me always have a substantial amount of “anthropic” uncertainty, and it’s mostly only in counterfactual/toy problems that I can use the naive Bayesian approach to epistemology.
(Note that taking the general decision theoretic perspective doesn’t lead to wacky quantum-suicide-like implications, otherwise I would be a lot more skeptical about the prudence of partially ditching the Bayesian boat.)
You seem to be modeling the AGW disputant’s decision policy as if he were internally representing, in a way that would be introspectively clear to him, his belief about AGW and his public stance about AGW as explicitly distinguished nodes, as opposed to having “actual belief about AGW” as a latent node that isn’t introspectively accessible.
I’m describing it that way, but I don’t think the introspection is necessary; it’s just easier to talk about as if he had full access to his mind. (Private beliefs don’t have to be beliefs that the mind’s narrator has access to, and oftentimes are kept out of its reach for security purposes!)
But if you want to model actual people’s policies in complex situations, then the naive Bayesian approach (e.g. with influence diagrams) doesn’t work or is way too cumbersome. Does your experience differ from mine?
I don’t think I’ve seen any Bayesian modeling of that sort of thing, but I haven’t gone looking for it.
Bayes nets in general are difficult for people, rather than computers, to manipulate, and so it’s hard to decide what makes them too cumbersome. (Bayes nets in industrial use, like for fault diagnostics, tend to have hundreds if not thousands of nodes, but you wouldn’t have a person traverse them unaided.)
If you wanted to code a narrow AI that determined someone’s mood from, say, webcam footage of them, I think putting your perception data into a Bayes net would be a common approach.
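(A minimal sketch of what I mean, assuming a naive-Bayes structure and purely invented features and probabilities: observed features extracted from webcam frames update a posterior over a hidden mood node.)

```python
# Naive-Bayes sketch: infer a hidden "mood" node from a few binary features
# one might extract from webcam frames. Feature names and probabilities are
# invented for illustration only.

p_mood = {"happy": 0.6, "sad": 0.4}  # prior over the hidden node

# P(feature is present | mood), one entry per observed feature node
p_feature = {
    "smiling":       {"happy": 0.80, "sad": 0.10},
    "brow_furrowed": {"happy": 0.15, "sad": 0.55},
    "head_down":     {"happy": 0.10, "sad": 0.45},
}

def posterior_mood(observations):
    """P(mood | observed features), assuming features independent given mood."""
    joint = {}
    for mood, prior in p_mood.items():
        likelihood = prior
        for feature, present in observations.items():
            p = p_feature[feature][mood]
            likelihood *= p if present else (1.0 - p)
        joint[mood] = likelihood
    z = sum(joint.values())
    return {mood: v / z for mood, v in joint.items()}

print(posterior_mood({"smiling": False, "brow_furrowed": True, "head_down": True}))
# -> roughly {'happy': 0.02, 'sad': 0.98}
```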
Political positions / psychology seem tough. I could see someone doing belief-mapping and correlation in a useful way, but I don’t see analysis on the level of Steve_Rayhawk’s post coming out of a computer-run Bayes net anytime soon, and I don’t think drawing out a Bayes net would help significantly with that sort of analysis. Possible but unlikely: we’ve got pretty sophisticated dedicated hardware for very similar things.
Another more theoretical reason I encourage caution about the “belief as anticipation” idea is that I don’t think it correctly characterizes the nature of belief in light of recent ideas in decision theory. To me, beliefs seem to be about coordination
Hmm. I’m going to need to sleep on this, but this sort of coordination still smells to me like anticipation.
(A general comment: this conversation has moved me towards thinking that a useful LW norm would be to taboo “belief” and use “anticipation” instead where appropriate, rather than trying to equate the two terms. I don’t know if you’re advocating for tabooing “belief”, though.)
(Complement to my other reply: You might not have seen this comment, where I suggest “knowledge” as a better descriptor than “belief” in most mundane settings. (Also I suspect that people’s uses of the words “think” versus “believe” are correlated with introspectively distinct kinds of uncertainty.))