I’m not Kawoomba, but I would say that yes, that’s wrong: the logical implications of my beliefs are not necessarily beliefs that I have; they are merely beliefs that I am capable of generating. (And in some cases, they aren’t even that, but that’s beside the point here.)
More specifically: do I believe that my car weighs more than 17.12311231 kilograms? Well, now that I’ve asked the question, yes I do. Did I believe that before I asked the question? No, I wouldn’t say so… though in this case, the derivation is so trivial it would not ordinarily occur to me to highlight the distinction.
The distinction becomes more salient when the derivation is more difficult; I can easily imagine myself responding to a Socratic question with some form of “Huh. I didn’t believe X a second ago, but X clearly follows from things I do believe, and which on reflection I continue to endorse, so I now believe X.”
Did I believe that before I asked the question? No, I wouldn’t say so...
Why not? Perhaps you could spell out the Socratic case a little more? I’m not stuck on saying that this or that must be what constitutes belief, but I do have the sense that I believe vastly more than I do (or even am able to) call up in a given moment. This is why I’m reluctant to call explicit awareness* a criterion of belief. On the other hand, I’m not logically omniscient, so I can’t be said to believe everything that follows from what I’m explicitly aware that I believe. My guess at a solution is that I believe (at least) everything that follows from what I explicitly believe, where the implications involved are instances of implications I am explicitly aware of.
So for example, I am explicitly aware that the car weighs more than 100kg, and I’m explicitly aware that it follows from this that the car weighs more than 99kg, and more than every value between 99kg and 100kg, and that it follows from this that it weighs more than 99.1234...kg. Hence, infinitely many beliefs.
*Edit: ‘explicit awareness’ should be glossed: I mean by it the relation in which I stand to a claim after you’ve asked me a question and I’ve given you that claim as an answer. I’m not sure exactly what this involves, but ‘explicit awareness’ seems to describe it pretty well.
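(Roughly, and in notation I’m inventing on the spot rather than anything standard: write E(p) for ‘I explicitly believe p’, A(p ⇒ q) for ‘I’m explicitly aware that p implies q, or of a schema it instantiates’, Bel(q) for ‘I believe q, occurrently or by extrapolation’, and w for the car’s weight in kilograms. Then the principle and the car case come out something like this:)

    % Closure (sketch): an explicit belief, plus explicit awareness of the
    % implication (or of a schema it instantiates), yields a belief.
    E(p) \wedge A(p \Rightarrow q) \;\longrightarrow\; \mathrm{Bel}(q)

    % The car case: one explicit belief and one explicitly grasped schema.
    E(w > 100) \wedge A\bigl(\forall x < 100 :\, (w > 100) \Rightarrow (w > x)\bigr)
        \;\longrightarrow\; \forall x \in \mathbb{R},\ x < 100 :\ \mathrm{Bel}(w > x)

The right-hand side of the second line ranges over uncountably many claims, which is all the ‘infinite beliefs’ conclusion is meant to amount to.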
I’m not sure I have anything more to say; this feels more like a question of semantic preferences than anything deep. That is, I don’t think we disagree about what my brain is doing, merely what words to assign to what my brain is doing.
I certainly agree that I have many more things-I-would-label-beliefs than I am consciously aware of at any given moment. But I still wouldn’t call “my car weighs more than 12.141341 kg” one of those beliefs. Nor would I say that I was explicitly aware that it followed from “car > 100kg” that “car > 12.141341 kg” prior to explicitly thinking about it.
That is, I don’t think we disagree about what my brain is doing, merely what words to assign to what my brain is doing.
We agree on what our brains are doing. I think we disagree on whether our beliefs are limited to what our brains are or were doing: I suppose I’m saying that I should be said to believe, right now, whatever my brain would predictably do (belief- and inference-wise) on the basis of what it’s doing and has already done (excluding any new information).
Suppose we divide my beliefs (on my view of ‘belief’) into my occurrent beliefs (stuff my brain has done or is doing) and my extrapolated beliefs (stuff it would predictably do, excluding new information). If you grant that my extrapolated beliefs have some special status that differentiates them from, say, the beliefs I’ll have about the episodes of The Americans I haven’t watched yet, then we’re debating semantics. If you don’t think my extrapolated beliefs are importantly different from any old beliefs I’ll have later on, then I think we’re arguing about something substantial.
Nor would I say that I was explicitly aware that it followed from “car > 100kg” that “car > 12.141341 kg” prior to explicitly thinking about it.
I mean that supposing you’re explicitly aware of a more general claim, say ‘the car weighs more than any specific real number of kilograms less than 100kg’, then you hold the (infinite) set of implied beliefs about how the car’s weight relates to every real number of kilograms below 100, even though your brain hasn’t, and couldn’t, run through all of those beliefs explicitly.
Yes, I grant that beliefs which I can have in the future based on analysis of data I already have are importantly different from beliefs I can have in the future only if I’m given new inputs. Yes, I agree that the infinite set of implied beliefs about the car’s weight is in the former category, assuming I’m aware that the car weighs more than 100 kg and that numbers work the way they work. I think we’re just debating semantics.
Okay, well, thanks for giving me the opportunity to think this through a bit more.