Say you have two agents, Rorty and Russell, who have ~the same values except that Rorty only optimizes for winning, and Russell optimizes for both winning and having “true beliefs” in some correspondence theory sense. Then Rorty should just win more on average than Russell, because he’ll have the winning actions/beliefs in cases where they conflict with the truth maximization objective, while Russell will have to make some tradeoff between the two.
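A toy way to see the dominance claim, as a sketch of my own rather than anything from the exchange (the option names and payoff numbers are invented): over any fixed menu of options, a pure win-maximizer can never score worse on winning than an agent that trades winning off against truth.

```python
# Minimal sketch of the Rorty/Russell point. All options and numbers are made up.
options = {
    "comforting_simplification": {"win": 0.9, "truth": 0.2},
    "nuanced_accurate_model":    {"win": 0.7, "truth": 0.9},
    "useful_approximation":      {"win": 0.8, "truth": 0.6},
}

def rorty_choice(opts):
    # Rorty: maximize winning only.
    return max(opts, key=lambda o: opts[o]["win"])

def russell_choice(opts, lam=0.5):
    # Russell: trade winning off against truth with weight lam.
    return max(opts, key=lambda o: opts[o]["win"] + lam * opts[o]["truth"])

r, s = rorty_choice(options), russell_choice(options)
print("Rorty picks:  ", r, "-> win =", options[r]["win"])
print("Russell picks:", s, "-> win =", options[s]["win"])
# Rorty's win score is >= Russell's by construction: he optimizes winning directly,
# and Russell only matches him when truth and winning happen not to conflict.
```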
Now maybe your values just happen to contain something like “having true beliefs in the correspondence theory sense is good.” I’m not super opposed to those kinds of values, although I would caution that truth-as-correspondence is actually hard to operationalize (because you can’t actually tell from sense experience whether a belief is true or not) and you definitely need to prioritize some types of truths over others (the number of hairs on my arm is a truth, but it’s probably not interesting to you). So you might want to reframe your truth-values in terms of “curiosity” or something like that.
I’m primarily making the point that truth and usefulness are different concepts in theory, rather than offering practical advice. They are still different concepts, even if usefulness is more useful!
It’s not obvious that winning is what you should be doing, because there are many definitions of “should”. It’s what you should be doing according to instrumental rationality, but not according to epistemic rationality.
Even if winning is what you should be doing... that doesn’t make truth the same concept as usefulness.
I’ll say! That’s one of my favourite themes.
I’m sort of fine with keeping the concepts of truth and usefulness distinct. While some pragmatists have tried to define truth in terms of usefulness (e.g. William James), others have said it’s better to keep truth as a primitive, and instead say that a belief is justified just in case it’s useful (Richard Rorty; see esp. here).
Well, part of what pragmatism is saying is that we should only care about instrumental rationality and not epistemic rationality. Insofar as epistemic rationality is actually useful, instrumental rationality will tell you to be epistemically rational.
It also seems that epistemic rationality is pretty strongly underdetermined. Of course the prior is a free parameter, but you also have to decide which parts of the world you want to be most correct about. Not to mention anthropics, where it seems the probabilities are just indeterminate and you have to bring in values to determine what betting odds you should use. And finally, once you drop the assumption that the true hypothesis is realizable (contained in your hypothesis space) and move to something like infra-Bayesianism, now you need to bring in a distance function to measure how “close” two hypotheses are. That distance function is presumably going to be informed by your values.
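To make the free-parameter point concrete, here is a small sketch of my own (the numbers and question names are invented): two agents can both update by Bayes’ rule on the same evidence and still land in different places because of their priors, and the same belief state can be scored very differently depending on which questions you weight as worth getting right.

```python
# Sketch of two ways epistemic rationality is underdetermined (illustrative numbers only).

# 1) Same evidence, same likelihoods, different priors -> different posteriors.
def posterior(prior_h, likelihood_h, likelihood_not_h):
    evidence = prior_h * likelihood_h + (1 - prior_h) * likelihood_not_h
    return prior_h * likelihood_h / evidence

data_fit = (0.8, 0.3)              # P(data | H), P(data | not-H)
print(posterior(0.5, *data_fit))   # agent with a 50% prior -> ~0.73
print(posterior(0.05, *data_fit))  # agent with a 5% prior  -> ~0.12
# Both agents updated "correctly"; Bayes alone doesn't say whose beliefs are better.

# 2) Same belief state, different weightings over which truths matter.
errors = {"arm_hair_count": 0.0, "will_the_bridge_hold": 0.4}   # per-question error
curiosity_weights = {"arm_hair_count": 0.9, "will_the_bridge_hold": 0.1}
practical_weights = {"arm_hair_count": 0.0, "will_the_bridge_hold": 1.0}
score = lambda w: sum(w[q] * errors[q] for q in errors)
print(score(curiosity_weights), score(practical_weights))
# The same beliefs look nearly perfect under one weighting and poor under the other.
```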
But usefulness doesn’t particularly justify correspondence-truth.
Using which definition of “should”? Obviously by the pragmatic definition...
Yes, which means it can’t be usefully implemented, which means it’s something you shouldn’t pursue according to pragmatism.
Of course, the fact that pragmatic arguments are somewhat circular doesn’t mean that non-pragmatic ones aren’t. Circularities are to be expected, because it takes an epistemology to decide an epistemology.
But even if you can’t do anything directly useful with unattainable truth, you can at least get a realistic idea of your limitations.
Neither I nor Rorty is saying that it does.
No, I mean it in the primitive, unqualified sense of “should.” Otherwise it would be a tautology. I personally approve of people solely caring about instrumental rationality.
I don’t think it can be implemented at all; people just imagine that they are implementing it, but on further inspection they’re adding in further non-epistemic assumptions.
Are you saying “I personally approve of...” is the primitive, unqualified meaning of “should”?
I’ve agreed with that.
“Then why care about it?”
At the very least it’s part of the unqualified meaning. Moral realists mean something more by it, or at least claim to do so.
Okay. I think it’s probably not the most effective way to do this in most cases.