I’d first like to congratulate you on a much more reasonable presentation of Popperian ideas than the recent trolling.
Justificationalism. Your belief is justified because it is a consequence of other beliefs. This path is self-defeating. Eventually you’ll go in circles trying to justify the other beliefs, or you’ll find beliefs you can’t justify. Justificationalism itself cannot be justified.
What about beliefs being justified by non-beliefs? If you’re a traditional foundationalist, you think everything is ultimately grounded in sense-experience, which we cannot reasonably doubt.
Also, what about externalism? This is one of the major elements of modern epistemology, as a response to such skeptical arguments.
I don’t mean to imply that either of these is correct, but it seems that if one is going to attempt to use disjunctive syllogism to argue for anti-justificationism, you ought to be sure you’ve partitioned the space of reasonable theories.
Perhaps it is so structured that it is invulnerable to being changed after it is adopted, regardless of the evidence observed.
This example seems anomalous. If there exists some H such that, if P(H) > 0.9, you lose the ability to choose P(H), you might want to postpone believing in it for prudential reasons. But these don’t really bear on what the epistemically rational level of belief is (assuming that remaining epistemically rational is not part of formal epistemic rationality).
Furthermore, if you adopted a policy of never raising P(H) above 0.9, it’d be just like you were stuck with P(H) < 0.9!
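To make that equivalence concrete, here is a minimal sketch in Python (my choice of language; the 0.9/0.1 likelihoods and the starting point are illustrative assumptions, not anything from this thread) comparing an unconstrained Bayesian updater with one whose policy caps P(H) at 0.9:

```python
# One Bayesian update on a piece of evidence favouring H:
# P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]
def update(p, like_h=0.9, like_not_h=0.1):
    return like_h * p / (like_h * p + like_not_h * (1 - p))

p_free = p_capped = 0.5
for step in range(1, 6):
    p_free = update(p_free)
    p_capped = min(update(p_capped), 0.9)  # the policy: never exceed 0.9
    print(f"after {step} observations: free={p_free:.4f}  capped={p_capped:.4f}")
# The capped agent reports 0.9000 from the first step onward,
# while the free agent's probability keeps climbing toward 1.
```

Once the cap binds, further confirming evidence makes no difference to the capped agent’s reported number, which is the sense in which the policy is functionally the same as being stuck at the threshold.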
Once you accept the idea that beliefs can be criticized, it’s a small step from there to adopting a similar approach to preferences and behavior. Here are some plausible criticisms of a preference:
It seems that there is a big difference between the two cases. We can criticize beliefs because we have a standard by which to measure them – reality – in the same way that we can criticize maps if they’re not very accurate representations of the territory. But it’s not at all clear that we have anything analogous with preferences. True, you could criticize my short-term preference for going to lectures as ineffective towards my long-term goal of getting my degree, but there doesn’t seem to be any canonical metric by which to criticize deep, foundational preferences.
One of the most important aspects of our epistemology with regard to factual beliefs is that the set of beliefs a computationally unlimited agent should hold is uniquely determined by the evidence it has witnessed. However, this doesn’t seem to be the case with preferences: if I have a single long-term preference, there’s no proof it should be {live a long time} rather than {die soon}. Without a constraining external metric, there are many consistent sets, and the only criticism you can ultimately bring to bear is one of inconsistency.
One of the most important aspects of our epistemology with regard to factual beliefs is that the set of beliefs a computationally unlimited agent should hold is uniquely determined by the evidence it has witnessed.
I don’t think this is true. Aumann’s agreement theorem shows that it holds in the limiting case, assuming an infinite string of evidence, but it doesn’t hold for any finite amount of evidence. Indeed, simply choose different versions of the Solomonoff prior: different formulations of Turing machines change the Kolmogorov complexity by at most a constant, but that still changes the Solomonoff priors; it just means that the two sets of priors need to look similar overall.
Would a similar statement couched in terms of limits be true?
As an agent’s computational ability increases, its beliefs should converge with those of similar agents regardless of their priors.
The limit you proposed doesn’t help. One’s beliefs after applying Bayes’ rule are determined by the prior and by the evidence. We’re talking about a situation where the evidence is the same and finite, and the priors differ. Having more compute power doesn’t enter into it.
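To illustrate, here is a minimal sketch in Python (my choice of language; the priors and likelihoods are made-up numbers, not anything from this thread). Two agents see the same finite evidence and apply Bayes’ rule exactly, yet end up with different posteriors because they started from different priors:

```python
# Bayes' rule: P(H|E) = P(E|H) P(H) / P(E).
# Both agents use the same likelihoods (same evidence) and compute
# the update exactly, so extra computing power changes nothing;
# only the prior differs.
def posterior(prior_h, like_e_given_h, like_e_given_not_h):
    p_e = like_e_given_h * prior_h + like_e_given_not_h * (1 - prior_h)
    return like_e_given_h * prior_h / p_e

LIKE_H, LIKE_NOT_H = 0.8, 0.3  # the same finite evidence for both agents

for prior in (0.5, 0.1):       # two agents, two priors
    p = posterior(prior, LIKE_H, LIKE_NOT_H)
    print(f"prior = {prior:.2f}  ->  posterior = {p:.3f}")
# prior = 0.50  ->  posterior = 0.727
# prior = 0.10  ->  posterior = 0.229
```

With the evidence fixed and finite, the residual disagreement is inherited entirely from the priors; no amount of additional computation closes the gap.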
What about beliefs being justified by non-beliefs? If you’re a traditional foundationalist, you think everything is ultimately grounded in sense-experience, which we cannot reasonably doubt.
If a traditional foundationalist believes that beliefs are justified by sense-experience, he’s a justificationalist. The argument in the OP works. How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?
Also, what about externalism? This is one of the major elements of modern epistemology, as a response to such skeptical arguments.
I had to look it up. It is apparently the position that the mind is a result of both what is going on inside the subject and outside the subject. Some externalists seem to be concerned about what beliefs mean, and others seem to carefully avoid using the word “belief”. In the OP I was more interested in whether beliefs accurately predict sensory experience. So far as I can tell, externalism says we don’t have a mind that can be considered as a separate object, so we don’t know things, so I expect it to have little to say about how we know what we know. Can you explain why you brought it up?
I don’t mean to imply that either of these is correct, but it seems that if one is going to attempt to use disjunctive syllogism to argue for anti-justificationism, you ought to be sure you’ve partitioned the space of reasonable theories.
I don’t see any way to be sure of that. Maybe some teenage boy sitting alone in his bedroom in Iowa figured out something new half an hour ago; I would have no way to know. Given the text above, do you think there are alternatives that are not covered?
Perhaps it is so structured that it is invulnerable to being changed after it is adopted, regardless of the evidence observed.
This example seems anomalous. If there exists some H such that, if P(H) > 0.9, you lose the ability to choose P(H), you might want to postpone believing in it for prudential reasons. But these don’t really bear on what the epistemically rational level of belief is (assuming that remaining epistemically rational is not part of formal epistemic rationality).
Furthermore, if you adopted a policy of never raising P(H) above 0.9, it’d be just like you were stuck with P(H) < 0.9!
The point is that if a belief will prevent you from considering alternatives, that is a true and relevant statement about the belief that you should know when choosing whether to adopt it. The point is not that you shouldn’t adopt it. Bayes’ rule is probably one of those beliefs, for example.
Without a constraining external metric, there are many consistent sets [of preferences], and the only criticism you can ultimately bring to bear is one of inconsistency.
I presently believe there are many consistent sets of preferences, and maybe you do too. If that’s true, we should find a way to live with it, and the OP is proposing such a way.
I don’t know what the word “ultimately” means there. If I leave it out, your statement is obviously false -- I listed a bunch of criticisms of preferences in the OP. What did you mean?
Maybe some teenage boy sitting alone in his bedroom in Iowa figured out something new half an hour ago; I would have no way to know.
Wrong externalism: that is externalism about the contents of the mind; the relevant theory here is externalism about epistemic justification.

The two examples I gave are well-known and well-studied theories, held by large numbers of philosophers. Indeed, more philosophers accept externalism than any other theory of justification. Any essay that argues for a position on the basis of the failure of some alternatives, without considering the most popular alternatives, is going to be unconvincing. If you were a biologist presenting a new theory of evolution, you would be forgiven for not comparing it to Intelligent Design; omitting to compare it to neo-Darwinism, however, would be a totally different issue. All you’ve done is present two straw-man theories and make pancritical rationalism look good in comparison.
What did you mean? (by ‘ultimately’)
That all the criticisms you listed can be reduced to criticisms of inconsistency – generally by appending the phrase ‘and you prefer this not to happen’ to them.
How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?
I don’t know what exactly “justify” is supposed to mean, but I’ll interpret it as “show to be useful for helping me win.” In that case, it’s simply that certain types of sense-experience seem to have been a reliable guide for my actions in the past, for helping me win. That’s all.
To think of it in terms of assumptions and conclusions is to stay in the world of true/false or justified/unjustified, where we can only go in circles because we are putting the cart before the horse. The verbal concepts of “true” and “justified” probably originated as a way to help people win, not as ends to be pursued for their own sake. But since they were almost always correlated with winning, they became ends pursued for their own sake—essential ones! In the end, if you dissolve “truth” it just ends up meaning something like “seemingly reliable guidepost for my actions.”
If a traditional foundationalist believes that beliefs are justified by sense-experience, he’s a justificationalist. The argument in the OP works. How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?
If he believes beliefs are only justified by experience, that could be a problem. Otherwise, he could use reductio, analysis, abduction, all sorts of things.
What about beliefs being justified by non-beliefs? If you’re a traditional foundationalist, you think everything is ultimately grounded in sense-experience, which we cannot reasonably doubt.
Yes, Bartley’s “justificationism” munges together two different ideas:
1) beliefs can only be justified by other beliefs
2) beliefs can be positively supported and not just refuted/criticised.
The attack on “justificationism” is actually a problem for Popperianism, since a classic refutation is a single observation such as a black swan. However, if my seeing one black swan doesn’t justify my belief that there is at least one black swan, how can I refute “all swans are white”?
However, if my seeing one black swan doesn’t justify my belief that there is at least one black swan, how can I refute “all swans are white”?
Refuting something is justifying that it is false. The point of the OP is that you can’t justify anything, so it’s claiming that you can’t refute “all swans are white”. A black swan is simply a criticism of the statement “all swans are white”. You still have a choice—you can see the black swan and reject “all swans are white”, or you can quibble with the evidence in a large number of ways which I’m sure you know of too and keep on believing “all swans are white”. People really do that; searching Google for “Rapture schedule” will pull up a prominent and current example.
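As a sketch of what “quibbling with the evidence” looks like in probabilistic terms (Python; the error-rate model and numbers are my own illustrative assumptions, not the OP’s): a reported black swan only refutes “all swans are white” outright if misreporting is impossible.

```python
# W = "all swans are white". Model the sighting as a report that
# could be mistaken (misperception, a hoax, a dyed swan, ...).
# Assumptions: P(report | not-W) = 1.0 (if black swans exist, one
# eventually gets reported); P(report | W) = error_rate.
def posterior_w(prior_w, error_rate):
    num = error_rate * prior_w
    return num / (num + 1.0 * (1 - prior_w))

for err in (0.0, 0.01, 0.5):
    print(f"error_rate = {err:.2f}  ->  P(W | report) = {posterior_w(0.9, err):.3f}")
# error_rate = 0.00  ->  0.000   (an outright refutation)
# error_rate = 0.01  ->  0.083   (the belief takes a severe hit)
# error_rate = 0.50  ->  0.818   (a determined quibbler keeps most of W)
```

Whether the observation counts as a “refutation” then turns on how large an error rate one is willing to entertain, which matches the point that the observer still has a choice.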
Why not just phrase it in terms of utility? “Justification” can mean too many different things.
Seeing a black swan diminishes (and for certain applications, destroys) the usefulness of the belief that all swans are white. This seems a lot simpler.
Putting it in terms of beliefs paying rent in anticipated experiences, the belief “all swans are white” told me to anticipate that if I knew there was a black animal perched on my shoulder it could not be a swan. Now that belief isn’t as reliable a guidepost. If black swans are really rare I could probably get by with it for most applications and still use it to win at life most of the time, but in some cases it will steer me wrong—that is, cause me to lose.
So can’t this all be better phrased in more established LW terms?
Refuting something is justifying that it is false. The point of the OP is that you can’t justify anything, so it’s claiming that you can’t refute “all swans are white”. A black swan is simply a criticism of the statement “all swans are white”.
Fine. If criticism is just a loose sort of refutation, then I’ll invent something that is just a loose kind of inductive support, let’s say schmitticism, and then I’ll claim that every time I see a white swan, that schmitticises the claim that all swans are white, and Popper can’t say schmitticism doesn’t work because there are no particular well-defined standards or mechanisms of schmitticism for his arguments to latch onto.
So can’t this all be better phrased in more established LW terms?
I think you’ve just reinvented pragmatism.
ETA: Ugh, that Wikipedia page is remarkably uninformative… anyone have a better link?