Can you give examples of beliefs that aren’t about anticipation?
Beliefs about things that are outside our future light cone possibly qualify, to the extent that the beliefs don’t relate to things that leave historical footprints. If you’ll pardon an extreme and trite case, I would have a belief that the guy who flew the relativistic rocket out of my light cone did not cease to exist as he passed out of that cone and also did not get eaten by a giant space monster ten minutes after. My anticipations are not constrained by beliefs about either of those possibilities.
In both cases my inability to constrain my anticipated experiences speaks to my limited ability to experience, not to a limitation of the universe. The same principles of ‘belief’ apply even though the subject has incidentally fallen outside the scope of what I am able to influence or verify even in principle.
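To make the light-cone talk concrete: an event is in my future light cone exactly when a signal leaving my here-and-now at or below light speed could reach it. Below is a minimal sketch of that membership test, assuming flat Minkowski spacetime, units where c = 1, and made-up coordinates for the rocket example.

```python
import math

def in_future_light_cone(here_now, event):
    """True if a signal sent from `here_now` at speed <= c could reach `event`.

    Both arguments are (t, x, y, z) tuples in units where c = 1, so the test
    is: the event is not earlier than `here_now`, and its spatial distance
    from `here_now` is no greater than the elapsed time.
    """
    dt = event[0] - here_now[0]
    spatial = math.sqrt(sum((e - h) ** 2 for e, h in zip(event[1:], here_now[1:])))
    return dt >= 0 and spatial <= dt

# Hypothetical numbers: ten minutes (600 s) after crossing out of my cone,
# the rocket guy is 1000 light-seconds away from me.
me = (0.0, 0.0, 0.0, 0.0)
monster_attack = (600.0, 1000.0, 0.0, 0.0)
print(in_future_light_cone(me, monster_attack))  # False: nothing I do now can reach that event
```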
Beliefs that aren’t easily testable also tend to be the kind of beliefs that have a lot of political associations, and thus tend not to act like beliefs as such so much as policies. Also, even falsified beliefs tend to be summarily replaced with new untested/not-intended-to-be-tested beliefs, e.g. “communism is good” with “correctly implemented communism is good”, or “whites and blacks have equal average IQ” with “whites and blacks would have equal average IQ if they’d had the same cultural privileges/disadvantages”. (Apologies for the necessary political examples. Please don’t use this as an opportunity to talk about communism or race.)
Many “beliefs” that aren’t politically relevant (which excludes most scientific “knowledge” and much knowledge of yourself, the people you know, what you want to do with your life, et cetera) are better characterized as knowledge, not beliefs as such. The answers to questions like “do I have one hand, two hands, or three hands?” or “how do I get back to my house from my workplace?” aren’t generally beliefs so much as knowledge, and in my opinion “knowledge” is not only epistemologically but also cognitively and neurologically the more accurate description, though I don’t know enough about memory encoding to back that claim up (the difference is, at least, introspectively apparent). Either way, I still think that, given our knowledge of the non-fundamental-ness of Bayes, we shouldn’t try too hard to stretch Bayes-ness to fit decision problems or cognitive algorithms that Bayes wasn’t meant to describe or solve, even if it’s technically possible to do so.
I believe the common term for that mistake (replacing a falsified belief with an untestable refinement) is “no true Scotsman”.
What do we lose by saying that doesn’t count as a belief? Some consistency when we describe how our minds manipulate anticipations (because we don’t separate out ones we can measure and ones we can’t, but reality does separate those, and our terminology fits reality)? Something else?
So if someone you care about is leaving your future light cone, you wouldn’t care if he gets horribly tortured as soon as he’s outside of it?
I’m not clear on the relevance of caring to beliefs. I would prefer that those I care about not be tortured, but once they’re out of my future light cone whatever happens to them is a sunk cost; I don’t see what I (or they) get from my preferring or believing things about them.
Yes, but you can affect what happens to them before they leave.
Before they leave, their torture would be in my future light cone, right?
Oops, I just realized that in my hypothetical scenario, by “someone being tortured outside your light cone” I meant someone being tortured somewhere your two future light cones don’t intersect.
Indeed; being outside of my future light cone just means whatever I do has no impact on them. But now not only can I not impact them, but they’re also dead to me (as they, or any information they emit, won’t exist in my future). I still don’t see what impact caring about them has.
Ok, my scenario involves your actions having an effect on them before your two light cones become disjoint.
Right, but for my actions to have an effect on them, they have to be in my future light cone at the time of action. It sounds like you’re interested in events that are in my future light cone now but will never be in any of the past light cones centered on points of my future worldline: for example, things that I can set in motion now which will not come to fruition until after I’m dead, or the person I care about pondering whether or not to jump into a black hole. Those things are worth caring about so long as they’re in my future light cone, and it’s meaningful to have beliefs about them to the degree that they could be in my past light cone in the future.
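The distinction here can be written out directly: “in my future light cone at the time of action” is one condition, “could be in my past light cone in the future” is another, and they can come apart. A minimal sketch, assuming flat spacetime with c = 1, that I stay at the spatial origin, and a hypothetical death_time marking the last point on my worldline:

```python
def influenceable(event, now=0.0):
    """The event is in my future light cone: a signal I send from the origin
    at time `now`, travelling at or below light speed, can reach it."""
    t, distance = event  # event time and its spatial distance from me (c = 1)
    return t - now >= distance

def verifiable(event, death_time):
    """The event could eventually be in my past light cone: light emitted at
    the event reaches the origin no later than `death_time`."""
    t, distance = event
    return t + distance <= death_time

# Something I can set in motion now that only comes to fruition after I'm dead:
event = (80.0, 10.0)  # 80 years from now, 10 light-years away
print(influenceable(event))                # True: worth caring about
print(verifiable(event, death_time=60.0))  # False: I'll never see how it turned out
```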