Hello everyone. I go by bouilhet. I don’t typically spend much time on the Internet, much less in the interactive blogosphere, and I don’t know how joining LessWrong will fit into the schedule of my life, but here goes. I’m interested from a philosophical perspective in many of the problems discussed on LW—AI/futurism, rationalism, epistemology, probability, bias—and after reading through a fair share of the material here I thought it was time to engage. I don’t exactly consider myself a rationalist (though perhaps I am one), but I spend a great deal of my thought-energy trying to see clearly—in my personal life as well as in my work life (art)—and reason plays a significant role in that. On the other hand, I’m fairly committed to the belief (at least partly based on observation) that a given (non-mathematical) truth claim cannot quite be separated from a person’s desire for said claim to be true. I’d like to have this belief challenged, naturally, but mostly I’m looking forward to further investigations of the gray areas. Broadly, I’m very attracted to what seems to be the unspoken premise of this community: that being definitively right may be off the table, but that one might, with a little effort, be less wrong.
I’m fairly committed to the belief (at least partly based on observation) that a given (non-mathematical) truth claim cannot quite be separated from a person’s desire for said claim to be true.
So, at the moment I believe that the car I can see out the window to the left of me is cream colored. I don’t think this belief is one I desire to be true (I would not be disappointed with a red car, for example). I have (depending on how you count) an infinity of such beliefs about my immediate environment. What do you make of these beliefs, given your above claim?
I guess I don’t think you’re making a truth claim when you say that the car you see is cream-colored. You’re just reporting an empirical observation. If, however, someone sitting next to you objected that the same car was red, then there would be a problem to sort out, i.e. there would be some doubt as to what was being observed, whether one of you was color blind, etc. And in that case I think you would desire your perception to be the accurate one, not because cream-colored is better than red, but because humans, I think, generally need to believe that their direct experience of the world is reliable.
For practical purposes, intuition is of course indispensable. I prefer to distinguish between “beliefs” and “perceptions” when it comes to one’s immediate environment (I wouldn’t say I believe I’m sitting in front of my computer right now; I’d simply say that I am sitting in front of my computer), but there are also limits to what can be perceived immediately (e.g. by the naked eye) which can destabilize perceptions one would otherwise be happy to take for granted.
So: for most intents and purposes, I have no interest in challenging your report of what was seen out the window. But it seems to me that in making your report you already have some interest in its accuracy.
No you don’t.
Yes I do. I believe that the car outside is cream colored. I believe that the car outside is not a cat. I believe that the car outside is heavier than 2.1312 kilograms, I believe...etc. I have an uncountably infinite number of beliefs just about the weight of that car!
You might not want to call these ‘beliefs’ for one reason or another, but that’s irrelevant to the grandparent: the great-grandparent is just discussing truth-claims and my attitude towards them. And I can clearly make an infinite number of truth-claims about my immediate environment, given infinite claim-making resources, of course (I assume, perhaps wrongly, that the question of my claim-making resources isn’t relevant to the point about belief and desire).
When you say “I have an infinity of such beliefs”, or even just “I can make an infinite number of truth-claims”, I assume that the “I” refers to “hen”, not some hypothetical entity with an infinite memory capacity (for the former), or an infinite lifespan (for the latter).
Unless you aren’t talking about yourself (and that car), both claims (have an infinity of beliefs, can make an infinite number of truth-claims) are obviously false on resource grounds alone. Even the number of truth-claims you could make in the remainder of your lifetime is limited. (In a hypothetical with infinite resources, it still would be a stretch to construct an infinite number of distinct claims about a finite object.)
Edit: You edited the “given infinite claim-making resources” in later, which is contradictory with your “I” and the whole scenario I responded to. “I have an infinite number of beliefs”—“No you don’t”—“Yes I do … with infinite resources”—“You don’t have infinite resources”—????
Yeah, but the point about resources isn’t relevant to my question. Though, in fact, neither is the idea that I have an infinity of beliefs. So tapping out.
Edit: though, you know, this is an interesting question and I feel unsure of my answer, so I’d like to hear your objection. My thought is that if I believe A, and if A implies B, and if I’m aware that A implies B, then I believe B.
So in this case, I believe the car (now gone, sadly) weighs more than 100 kg. I’m aware that this implies that it weighs more than 99 kg. I’m also aware that this implies that it weighs more than all the real numbers of kilograms between 99 and 100. This is an infinity, and therefore I have an infinity of beliefs. Is that wrong?
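To spell out the step I’m relying on (just my own shorthand, nothing standard: Bel(·) for “I believe that ·”, and w for the car’s weight in kilograms):

\[
\text{(closure)}\qquad \mathrm{Bel}(A) \;\wedge\; \mathrm{Bel}(A \rightarrow B) \;\Rightarrow\; \mathrm{Bel}(B)
\]

\[
\text{for each real } x \in (99,100):\qquad \mathrm{Bel}(w > 100) \;\wedge\; \mathrm{Bel}\big((w > 100) \rightarrow (w > x)\big) \;\Rightarrow\; \mathrm{Bel}(w > x)
\]

Since there are uncountably many reals in (99, 100), the schema, if it holds, yields uncountably many beliefs.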
I’m not Kawoomba, but I would say that yes, that’s wrong: the logical implications of my beliefs are not necessarily beliefs that I have, they are merely beliefs that I am capable of generating. (And in some cases, they aren’t even that, but that’s beside the point here.)
More specifically: do I believe that my car weighs more than 17.12311231 kilograms? Well, now that I’ve asked the question, yes I do. Did I believe that before I asked the question? No, I wouldn’t say so… though in this case, the derivation is so trivial it would not ordinarily occur to me to highlight the distinction.
The distinction becomes more salient when the derivation is more difficult; I can easily imagine myself responding to a Socratic question with some form of “Huh. I didn’t believe X a second ago, but X clearly follows from things I do believe, and which on reflection I continue to endorse, so I now believe X.”
Did I believe that before I asked the question? No, I wouldn’t say so...
Why not? Perhaps you could spell out the Socratic case a little more? I’m not stuck on saying that this or that must be what constitutes belief, but I do have the sense that I believe vastly more than what I do (or even am able to) call up in a given moment. This is why I’m reluctant to call explicit awareness* a criterion of belief. On the other hand, I’m not logically omniscient, so I can’t be said to believe everything that follows from what I’m explicitly aware that I believe. My guess as to a solution is that I believe (at least) everything that follows from what I explicitly believe, where each such implication is an instance of an implication I am explicitly aware of.
So for example, I am explicitly aware that the car weighs more than 100kg, and I’m explicitly aware that it follows from this that the car weighs more than 99kg, and more than everything between 99 and 100kg, and that it follows from this that it weighs more than 99.1234...kg. Hence, infinite beliefs.
*Edit: explicit awareness should be glossed: I mean by this the relation I stand to a claim after you’ve asked me a question and I’ve given you that claim as an answer. I’m not sure what this involves, but ‘explicit awareness’ seems to describe it pretty well.
I’m not sure I have anything more to say; this feels more like a question of semantic preferences than anything deep. That is, I don’t think we disagree about what my brain is doing, merely what words to assign to what my brain is doing.
I certainly agree that I have many more things-I-would-label-beliefs than I am consciously aware of at any given moment. But I still wouldn’t call “my car weighs more than 12.141341 kg” one of those beliefs. Nor would I say that I was explicitly aware that it followed from “car > 100kg” that “car > 12.141341 kg” prior to explicitly thinking about it.
That is, I don’t think we disagree about what my brain is doing, merely what words to assign to what my brain is doing.
We agree on what our brains are doing. I think we disagree on whether or not our beliefs are limited to what our brains are or were doing: I suppose I’m saying that I should be said to believe right now whatever my brain would predictably arrive at (belief/inference-wise) on the basis of what it’s doing and has already done (excluding any new information).
Suppose we divide my beliefs (on my view of ‘belief’) into my occurrent beliefs (stuff my brain has done or is doing) and my extrapolated beliefs (stuff it would predictably do, excluding new information). If you grant that my extrapolated beliefs have some special status that differentiates them from, say, the beliefs I’ll have about the episodes of The Americans I haven’t watched yet, then we’re debating semantics. If you don’t think my extrapolated beliefs are importantly different from any old beliefs I’ll have later on, then I think we’re arguing about something substantial.
Nor would I say that I was explicitly aware that it followed from “car > 100kg” that “car > 12.141341 kg” prior to explicitly thinking about it.
I mean that supposing you’re explicitly aware of a more general claim, say ‘the car weighs more than any specific real number of kilograms less than 100kg’, then you believe the (infinite) set of implied beliefs about the relation of the car’s weight to every real number of kg below 100, even though your brain hasn’t, and couldn’t, run through all of those beliefs explicitly.
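Schematically (again, just my own shorthand), the move I’m defending is from one general belief to its infinitely many instances:

\[
\mathrm{Bel}\big(\forall x < 100:\; w > x\big) \;\Rightarrow\; \forall x < 100:\; \mathrm{Bel}(w > x)
\]

The left-hand side is a single general belief a brain can explicitly hold; the right-hand side is the infinite family of instance-beliefs I’m calling extrapolated. Whether that step is legitimate is, I take it, the question.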
Yes, I grant that beliefs which I can have in the future based on analysis of data I already have are importantly different from beliefs I can have in the future only if I’m given new inputs. Yes, I agree that the infinite set of implied beliefs about the car’s weight is in the former category, assuming I’m aware that the car weighs more than 100 kg and that numbers work the way they work. I think we’re just debating semantics.
(Let me just add to what TheOtherKawoomba already said)
This is an infinity, and therefore I have an infinity of beliefs. Is that wrong?
If that were so, then “I believe the sky is blue” would mean “I have an infinity of beliefs about the sky, namely that it is blue, so it also is ‘not blue + 1/nth the distance to the next color’ (then vary the n).”
A student writing down “x>2” would have stated an infinity of beliefs about the answer. Does that seem like a sensible definition of belief? Say I picked one out of your infinite beliefs about the car’s weight. Where is it located in your brain? Which synapses encode it? It would have to be the same ones also encoding an infinity of other beliefs about the car’s weight. Does that make sense? I plead the Chewbacca defense.
There’s another problem if you treat all the implications as if they were your beliefs, even where you’ve not explicitly followed the implication. Propositions in math simply follow from axioms, i.e. they are implications of some basic beliefs, yet for some of them the truth value is famously not yet known (nobody knows, for example, whether Goldbach’s conjecture follows from the standard axioms of arithmetic). If you held every belief logically implied by your stated beliefs to be your belief just the same, you’d face a conundrum: you’d be uncertain about such famous yet unsettled propositions. Yet that uncertainty isn’t in the territory: either the proposition is implied by the axioms or it isn’t. And still you couldn’t construct the set of “beliefs implied by this belief”. So would you follow only “trivial” implications such as the one in your example? You’d still need to evaluate them, and it is that simple fact of having to evaluate whether an implication actually is one (or even whether 99 is actually smaller than 100, however trivial it seems) that is the basis for the new, derived belief, and the reason you cannot automatically follow an infinity of implications simultaneously. Since you cannot evaluate an infinity of numbers, you cannot hold an infinity of beliefs.
Agreed. Edit: I don’t think the one claim means the other, but I do agree that the one (in this case) implies the other. Do you believe that the sky’s being blue excludes its being (at the same time and in the same respect) red?
A student writing down “x>2” would have stated an infinity of beliefs about the answer.
Well, the student could be said to believe an infinity of things about the answer, not that the student has stated such an infinity. We agree that to state (or explicitly think about) an infinity of beliefs would be impossible.
Where is it located in your brain?
In response to Dave (the other one), I distinguished beliefs, on my view, into occurrent beliefs (those that do correspond, or have corresponded, to some neural process) and extrapolated beliefs (those my brain could, barring any new information, predictably arrive at from occurrent beliefs). I am saying that I should be said to believe right now both all of my occurrent beliefs and all of my extrapolated beliefs, and that my extrapolated beliefs are infinite. My extrapolated beliefs have no place in my brain, but they’re safely within the bounds of logic+physics.
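Maybe a toy analogy helps here (purely an illustration of the logical point, not a claim about neuroscience; the names are made up for the example):

# Toy illustration only: a finite rule can stand in for an infinite set of
# instance-beliefs about the car's weight, without any of them being stored.

OCCURRENT_LOWER_BOUND_KG = 100.0  # the one explicitly held belief: the car weighs more than 100 kg

def believes_car_heavier_than(x_kg: float) -> bool:
    """Decide, for any x, whether 'the car weighs more than x kg' is among the
    extrapolated beliefs, using only the finite occurrent belief plus the
    trivial rule 'heavier than 100 kg implies heavier than any smaller x'."""
    return x_kg < OCCURRENT_LOWER_BOUND_KG

# None of these individual beliefs is written down anywhere, yet each is
# settled in advance by the finite state plus the rule:
assert believes_car_heavier_than(99.0)
assert believes_car_heavier_than(17.12311231)
assert not believes_car_heavier_than(250.0)  # not implied by the occurrent belief

The infinite set is never enumerated; it is fixed by a finite state together with an inference rule, which is roughly the sense in which I mean my extrapolated beliefs are within the bounds of logic+physics.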
I plead the Chewbacca defense.
I...haven’t heard that one.
There’s another problem if you consider all the implications as if they were your beliefs, even if you’ve not explicitly followed the implication.
I don’t think this; I agree that it would lead to absurd results.