What about beliefs being justified by non-beliefs? If you’re a
traditional foundationalist, you think everything is ultimately
grounded in sense-experience, which we cannot reasonably doubt.
If a traditional foundationalist believes that beliefs are
justified by sense-experience, he’s a justificationist. The
argument in the OP works. How can he justify the belief that
beliefs are justified by sense-experience without first assuming
his conclusion?
Also, what about externalism? This is one of the major elements
of modern epistemology, as a response to such skeptical
arguments.
I had to look it up.
It is apparently the position that the mind is a result of both
what is going on inside the subject and outside the subject.
Some externalists seem to be concerned with what beliefs mean, and others seem to carefully avoid using the word “belief”. In the OP I was more interested in whether beliefs accurately predict sensory experience. So far as I can tell, externalism says we don’t have a mind that can be considered as a separate object, and therefore that we don’t know things, so I expect it to have little to say about how we know what we know. Can you explain why you brought it up?
I don’t mean to imply that either of these is correct, but it seems that if you are going to use disjunctive syllogism to argue for anti-justificationism, you ought to be sure you’ve partitioned the space of reasonable theories.
I don’t see any way to be sure of that. Maybe some teenage boy
sitting alone in his bedroom in Iowa figured out something new
half an hour ago; I would have no way to know. Given the text above, do you think there are alternatives that are not covered?
Perhaps it is so structured that it is invulnerable to being
changed after it is adopted, regardless of the evidence
observed.
This example seems anomalous. If there exists some H such that, once P(H) > 0.9, you lose the ability to revise P(H), you might want to postpone believing in it for prudential reasons. But those reasons don’t really bear on what the epistemically rational level of belief is (assuming that remaining epistemically rational is not itself part of formal epistemic rationality).
Furthermore, if you adopted a policy of never raising P(H) above 0.9, it’d be just as if you were stuck with P(H) < 0.9!
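To make that equivalence concrete, here is a minimal sketch in Python (my own toy illustration, not anything from the OP; the prior, evidence stream, and likelihood ratios are all made up for the example). It compares an agent whose belief in H freezes once P(H) crosses 0.9 with an agent who follows the policy of never letting P(H) rise above 0.9:

    def bayes_update(p_h, likelihood_ratio):
        # Odds-form Bayes update: posterior odds = prior odds * P(E|H)/P(E|~H).
        odds = p_h / (1.0 - p_h)
        odds *= likelihood_ratio
        return odds / (1.0 + odds)

    def run(evidence, policy):
        p = 0.5           # arbitrary starting prior for H
        frozen = False
        for lr in evidence:
            if policy == "self-sealing":
                if frozen:
                    continue              # the belief can no longer be revised
                p = bayes_update(p, lr)
                if p > 0.9:
                    frozen = True         # crossing 0.9 locks the belief in place
            elif policy == "capped":
                # Never let P(H) rise above 0.9, no matter what the evidence says.
                p = min(bayes_update(p, lr), 0.9)
        return p

    # Five pieces of evidence for H, then five against it (likelihood ratios 3 and 1/3).
    evidence = [3.0] * 5 + [1.0 / 3.0] * 5

    print(run(evidence, "self-sealing"))  # ends near 0.96: the counter-evidence is ignored
    print(run(evidence, "capped"))        # ends near 0.04: it tracks the counter-evidence,
                                          # but could never have exceeded 0.9 even if H
                                          # had kept being confirmed

The self-sealing agent ends up insensitive to the later counter-evidence; the capped agent tracks it, but at the cost of never being able to exceed 0.9 however well supported H is, which is the sense in which the cap is just like being stuck below 0.9.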
The point is that if a belief will prevent you from considering
alternatives, that is a true and relevant fact about the belief, one you should know when choosing whether to adopt it.
The point is not that you shouldn’t adopt it. Bayes’ rule is
probably one of those beliefs, for example.
Without a constraining external metric, there are many
consistent sets [of preferences], and the only criticism you can
ultimately bring to bear is one of inconsistency.
I presently believe there are many consistent sets of
preferences, and maybe you do too. If that’s true, we should
find a way to live with it, and the OP is proposing such a way.
I don’t know what the word “ultimately” means there. If I leave it
out, your statement is obviously false -- I listed a bunch of
criticisms of preferences in the OP. What did you mean?
Maybe some teenage boy sitting alone in his bedroom in Iowa figured out something new half an hour ago; I would have no way to know.
The two examples I gave are well-known and well-studied theories, held by large numbers of philosophers. Indeed, more philosophers accept externalism than any other theory of justification. Any essay that argues for a position on the basis of the failure of some alternatives, without considering the most popular alternatives, is going to be unconvincing. If you were a biologist presenting a new theory of evolution, you would be forgiven for not comparing it to Intelligent Design; omitting to compare it to neo-Darwinism, however, would be a totally different issue. All you’ve done is present two straw-man theories and make pancritical rationalism look good in comparison.
What did you mean? (by ‘ultimately’)
That all the criticisms you listed can be reduced to criticisms of inconsistency – generally by appending the phrase ‘and you prefer this not to happen’ to them.
How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?
I don’t know what exactly “justify” is supposed to mean, but I’ll interpret it as “show to be useful for helping me win.” In that case, it’s simply that certain types of sense-experience seem to have been a reliable guide for my actions in the past, for helping me win. That’s all.
To think of it in terms of assumptions and conclusions is to stay in the world of true/false or justified/unjustified, where we can only go in circles because we are putting the cart before the horse. The verbal concepts of “true” and “justified” probably originated as a way to help people win, not as ends to be pursued for their own sake. But since they were almost always correlated with winning, they became ends pursued for their own sake—essential ones! In the end, if you dissolve “truth” it just ends up meaning something like “seemingly reliable guidepost for my actions.”
If a traditional foundationalist believes that beliefs are justified by sense-experience, he’s a justificationist. The argument in the OP works. How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?
If he believes beliefs are only justified by experience, that could be a problem. Otherwise, he could use reductio, analysis, abduction, all sorts of things.