I think you’re exactly right that distinguishing between what people claim, and then what they turn out to actually expect, is the important thing here. My argument is that alief/belief (or belief in belief), as terms, make this harder. I just used the words “claim” and “expectation”, and I would be immensely surprised if anyone misunderstands me. (To be redundant to the point of silliness: I claim that’s my expectation.)
“Belief” has, I think, lost any coherent definition. It seems now, not to refer to expectations, but to mean “I want to expect X.” Or to be a command: “model me as expecting X.” Whenever it’s used, I have to ask “what do you mean you believe it?” and the answer is often “I think it’s true”; but then when I say “what do you mean, you think it’s true?”, the answer is often “I just think it’s true”, or “I choose to think it’s true”. So it always hits a state somewhere on the continuum between “meaningless” and “deceptive”.
Words like “claim”, “expectation”, or even “presume”—as in, “I choose to presume this is true”—all work fine. But belief is broken, and alief implies all we need is to add another word on top. My claim is that we need, instead, fewer words: merely the ones that remain meaningful, rather than the ones acting as linguistic shields against meaning.
But to use rthomas2's idea of expectation, these can just be contradictory expectations. If you take away the constraint that beliefs be functions from world states to probabilities and instead let them be relations between world states and probabilities, we eliminate the need to talk about alief/belief or belief and belief-in-belief and can just talk about having contradictory axioms/priors/expectations.
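The function-vs-relation distinction can be made concrete with a minimal sketch (the propositions and probabilities below are invented for illustration). A function assigns each world state exactly one probability, so contradictory expectations cannot coexist; a relation is just a set of pairs, so the same world state can appear with conflicting probabilities:

```python
# A "function" belief: each proposition maps to exactly one probability.
# A dict enforces this, so contradictory expectations cannot coexist.
functional_belief = {"it will rain": 0.8}

# A "relation" belief: a set of (proposition, probability) pairs.
# The same proposition may appear with conflicting probabilities --
# one way to model an alief and a belief held simultaneously.
relational_belief = {
    ("it will rain", 0.8),  # the stated expectation
    ("it will rain", 0.2),  # the expectation revealed by behavior
}

# The relation assigns two contradictory probabilities to one proposition.
probs = {p for (s, p) in relational_belief if s == "it will rain"}
print(sorted(probs))  # both probabilities survive side by side
```

On this picture, “alief vs. belief” is just the observation that the relation is not a function at that world state; no second vocabulary item is needed.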
I think making an alief/belief distinction is mostly interesting if you want to think in terms of belief in the technical sense used by Jaynes, Pearl, et al. Unfortunately humans don’t actually have beliefs in this technical sense, hence the entire rationalist project.