I dare say you can in principle have no conscious beliefs at all. Presumably that’s roughly the situation of an ant, for instance. But your actions will still embody various things we might as well call beliefs (the term “alief” is commonly used around here for a similar idea) and you will do better if those match the world better. I’m betting that I can do this better by actually having beliefs, because then I get to use this nice big brain evolution has given me.
Don’t you need to accept the axioms in order to [...]
Yes. (I’ve said this from the outset.) Note that this doesn’t make the evidence for them disappear, because it is possible (in principle) for the evidence to point the other way. We can see this from the closely parallel case where we instead assume, as a working hypothesis, that our perception and reasoning and memory are perfect, engage in scientific investigation of them, and find lots of evidence that they aren’t perfect after all.
It seems that you want a set of axioms from which we can derive everything, but you also want justification for adopting those axioms (so they aren’t really serving as axioms after all), and “they seem obvious” won’t do for you (even though that is pretty much the standard ground for adopting any axioms); neither will the other considerations mentioned in this discussion. So, I repeat: What possibly conceivable outcome from this discussion would count as “OK” for you? It seems to me that you’re asking for something provably impossible.
(That is: If even axioms that seem obvious, can’t be avoided, and appear to work out well in practice aren’t good enough for you to treat them as axioms, then it looks like your strategy is to keep asking “why?” in response to any proposed set of axioms. But in that case, as you keep doing this, one of two things must necessarily happen: either you will return to axioms already considered, in which case you have blatant circular reasoning of the sort you are already objecting to, only worse; or else the axioms under consideration will get unboundedly longer, and eventually you’ll get ones too complicated to fit into your brain, or indeed into the collective understanding of the human race. In neither case are you going to be satisfied.)
I dare say you can in principle have no conscious beliefs at all. Presumably that’s roughly the situation of an ant, for instance. But your actions will still embody various things we might as well call beliefs (the term “alief” is commonly used around here for a similar idea) and you will do better if those match the world better. I’m betting that I can do this better by actually having beliefs, because then I get to use this nice big brain evolution has given me.
I don’t know why you presume that ants have no conscious beliefs, but I suppose that’s irrelevant. Anyway, I don’t disagree with what you said, but I don’t see how it entails that one is incapable of having no beliefs. You only suggest that having beliefs is beneficial.
What possibly conceivable outcome from this discussion would count as “OK” for you? It seems to me that you’re asking for something provably impossible.
(That is: If even axioms that seem obvious, can’t be avoided, and appear to work out well in practice aren’t good enough for you to treat them as axioms, then it looks like your strategy is to keep asking “why?” in response to any proposed set of axioms. But in that case, as you keep doing this, one of two things must necessarily happen: either you will return to axioms already considered, in which case you have blatant circular reasoning of the sort you are already objecting to, only worse; or else the axioms under consideration will get unboundedly longer, and eventually you’ll get ones too complicated to fit into your brain, or indeed into the collective understanding of the human race. In neither case are you going to be satisfied.)
“Okay,” as I have said before, means having a reasonable chance of being true. Anyway, I see your point; I really do seem to be asking for an impossible answer.