There are some kinds of truths that don’t seem to be covered by truth-as-correspondence-between-map-and-territory. (Note: This general objection is well known and is given as Objection 1 in SEP’s entry on Correspondence Theory.) Consider:
modal truths if one isn’t a modal realist
mathematical truths if one isn’t a mathematical Platonist
normative truths
Maybe the first two just argue for Platonism and modal realism (although I note that Eliezer explicitly disclaimed being a modal realist). The last one is the most problematic to me, because some kinds of normative statements seem to be talking about what one should do given some assumed-to-be-accurate map, and not about the map itself. For example, “You should two-box in Newcomb’s problem.” If I say “Alice has a false belief that she should two-box in Newcomb’s problem” it doesn’t seem like I’m saying that her map doesn’t correspond to the territory.
So, a couple of questions that seem open to me: Do we need other notions of truth, besides correspondence between map and territory? If so, is there a more general notion of truth that covers all of these as special cases?
I think a more general notion of truth could be defined as correspondence between a map and any structure. If you define a structure using axioms and your map refers to that structure, then you can talk about the correspondence properties of that reference. This at least covers both mathematical structures and physical reality.
If I say “Alice has a false belief that she should two-box in Newcomb’s problem” it doesn’t seem like I’m saying that her map doesn’t correspond to the territory.
The problem with Alice’s belief is that it is incomplete. It’s like saying “I believe that 3 is greater than” (end of sentence).
Even incomplete sentences can work in some contexts where people know how to interpret them. For example, if we had a convention that all sentences ending with “greater than” have to be interpreted as “greater than zero”, then in that context the sentence “3 is greater than” makes sense, and is true. It just does not make sense outside of this context. Without context, it’s not a logical proposition, but rather a proposition template.
Similarly, the sentence “you should X” is meaningful in contexts which provide additional explanation of what “should” means. For a consequentialist, “you should X” means “X maximizes your utility”. For a theist, it could mean “X makes Deity happy”. For both of them, the meaning of “should” is obvious, and within their contexts, they are right. The sentence becomes confusing only when we take it out of context; when we pretend that the context is not necessary for completing it.
So perhaps the problem is not “some truths are not about map-territory correspondence”, but rather “some sentences require context to be transformed into true/false expressions (about map-territory correspondence)”.
Seems to me that this is somehow related to making ideas pay rent, in the sense that when you describe how you expect the idea to pay rent, you explain the context in the process.
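A loose way to picture this in code (a minimal sketch; the function names and the toy contexts below are made up for illustration, not anything from the thread): a proposition template is like a function with an unfilled slot, and a context is whatever fills that slot so the template can finally be evaluated as true or false.

```python
# A "proposition template": it has no truth value until a context fills the gap.
def greater_than(x, than=None):
    if than is None:
        raise ValueError("incomplete proposition: the context must supply 'than'")
    return x > than

# With the convention that "... is greater than" means "... is greater than zero":
context = {"than": 0}
print(greater_than(3, **context))   # True -- "3 is greater than" is now a real proposition

# The same move works for "you should X": the context says what "should" means.
def should(action, context):
    utilities = context["utility"]                  # how this speaker scores the options
    return utilities[action] == max(utilities.values())

consequentialist_context = {"utility": {"one-box": 1_000_000, "two-box": 1_000}}
print(should("one-box", consequentialist_context))  # True within this context; undefined without one
```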
“Makes Deity happy” sounds to me like a very specific interpretation of “utility”, rather than something separate from it. I can’t picture any context for the phrase “P should X” that doesn’t simply render “X maximizes utility” for different values of the word “utility”. If “make Deity happy” is the end goal, wouldn’t “utility” be whatever gives you the most efficient route to that goal?
Utility has a single, absolute, inexpressible meaning. To say “X gives me Y utility” is pointless, because I am making a statement about qualia, which are inherently incommunicable—I cannot describe the quale “red” to a person without a visual cortex, because that person is incapable of experiencing red (or any other colour-quale). “X maximises my utility” is implied by the statements “X maximises my deity’s utility” and “maximising my deity’s utility maximises my utility”, but this is not the same thing as saying that X should occur (which also requires that maximising your own utility is your objective). Stripped of the word “utility”, your statement reduces to “The statement ‘If X is the end goal, and option A is the best way to achieve X, A should be chosen’ is tautologous”, which is true because this is the definition of the word “should”.
Michael Lynch has a functionalist theory of truth (described in this book) that responds to concerns like yours. His claim is that there is a “truth role” that is constant across all domains of discourse where we talk about truth and falsity of propositions. The truth role is characterized by three properties:
Objectivity: The belief that p is true if and only if with respect to the belief that p, things are as they are believed to be.
Norm of belief: It is prima facie correct to believe that p if and only if the proposition that p is true.
End of inquiry: Other things being equal, true beliefs are a worthy goal of inquiry.
Lynch claims that, in different domains of discourse, there are different properties that play this truth role. For instance, when we’re doing science it’s plausible that the appropriate realizer of the truth role is some kind of correspondence notion. On the other hand, when we’re doing mathematics, one might think that the truth role is played by some sort of theoretical coherence property. Mathematical truths, according to Lynch, satisfy the truth role, but not by virtue of correspondence to some state of affairs in our external environment. He has a similar analysis of moral truths.
I’m not sure whether Lynch’s particular description of the truth role is right, but the functionalist approach (truth is a functional property, and the function can be performed by many different realizers) is very attractive to me.
Maybe the first two just argue for Platonism and modal realism (although I note that Eliezer explicitly disclaimed being a modal realist).
I think Yudkowsky is a Platonist, and I’m not sure he has a consistent position on modal realism, since when arguing on morality he seemed to espouse it: see his comment here.
For example, “You should two-box in Newcomb’s problem.” If I say “Alice has a false belief that she should two-box in Newcomb’s problem” it doesn’t seem like I’m saying that her map doesn’t correspond to the territory.
I don’t think that “You should two-box in Newcomb’s problem.” is actually a normative statement, even if it contains a “should”: you can rephrase it epistemically as “If you two-box in Newcomb’s problem then you will maximize your expected utility”.
Therefore, if you say “Alice has a false belief that if she two-boxes in Newcomb’s problem then she will maximize her expected utility” you are saying that her belief doesn’t correspond to the mathematical constructs underlying Newcomb’s problem. If you take the Platonist position that mathematical constructs exist as external entities (“the territory”), then yes, you are saying that her map doesn’t correspond to the territory.
I don’t think that “You should two-box in Newcomb’s problem.” is actually a normative statement, even if it contains a “should”: you can rephrase it epistemically as “If you two-box in Newcomb’s problem then you will maximize your expected utility”.
Well, sure, a utilitarian can always “rephrase” should-statements that way; to a utilitarian what “X should Y” means is “Y maximizes X’s expected utility.” That doesn’t make “X should Y” not a normative statement, it just means that utilitarian normative statements are also objective statements about reality.
Conversely, I’m not sure a deontologist would agree that you can rephrase one as the other… that is, a deontologist might coherently (and incorrectly) say “Yes, two-boxing maximizes expected utility, but you still shouldn’t do it.”
I think you are conflating two different types of “should” statements: moral injunctions and decision-theoretical injunctions.
The statement “You should two-box in Newcomb’s problem” is normally interpreted as a decision-theoretical injunction. As such, it can be rephrased epistemically as “If you two-box in Newcomb’s problem then you will maximize your expected utility”.
But you could also interpret the statement “You should two-box in Newcomb’s problem” as the moral injunction “It is morally right for you to two-box in Newcomb’s problem”. Moral injunctions can’t be rephrased epistemically, at least unless you assume a priori that there exist some external moral truths that can’t be further rephrased.
The utilitarian in your comment is doing that. His actual rephrasing is “If you two-box in Newcomb’s problem then you will maximize the expected cumulative utility of the universe”. This assumes that:
This cumulative utility of the universe exists as an external entity
The statement “It is morally right for you to maximize the expected cumulative utility of the universe” exists as an external moral truth.
I think Yudkowsky is a Platonist, and I’m not sure he has a consistent position on modal realism, since when arguing on morality he seemed to espouse it: see his comment here.
Thanks for the link. That does seem inconsistent.
I don’t think that “You should two-box in Newcomb’s problem.” is actually a normative statement, even if it contains a “should”: you can rephrase it epistemically as “If you two-box in Newcomb’s problem then you will maximize your expected utility”.
This comment should help you understand why I disagree. Does it make sense?
This comment should help you understand why I disagree. Does it make sense?
I don’t claim that all injunctions can be rephrased as epistemic statements. I claim that decision-theoretic injunctions can be rephrased as epistemic statements. Moral injunctions can’t.
I don’t claim that all injunctions can be rephrased as epistemic statements. I claim that decision-theoretic injunctions can be rephrased as epistemic statements. Moral injunctions can’t.
I’m confused by your reply because the comment I linked to tried to explain why I don’t think “You should two-box in Newcomb’s problem” can be rephrased as an epistemic statement (as you claimed earlier). Did you read it, and if so, can you explain why you disagree with its reasoning?
ETA: Sorry, I didn’t notice your comment in the other subthread where you gave your definitions of “decision-theoretic” vs “moral” injunctions. Your reply makes more sense with those definitions in mind, but I think it shows that the comment I linked to didn’t get my point across. So I’ll try it again here. You said earlier:
I don’t think that “You should two-box in Newcomb’s problem.” is actually a normative statement, even if it contains a “should”: you can rephrase it epistemically as “If you two-box in Newcomb’s problem then you will maximize your expected utility”.
A causal decision theorist (C) and an evidential decision theorist (E) have different definitions of “maximize your expected utility”, and so when C says to E “you should two-box in Newcomb’s problem” he is not just saying “If you two-box in Newcomb’s problem then you will maximize your expected utility according to the CDT formula” since E wouldn’t care about that. So my point is that “you should two-box in Newcomb’s problem” is usually not a “decision-theoretical injunction” in your sense of the phrase, but rather a normative statement as I claimed.
A causal decision theorist (C) and an evidential decision theorist (E) have different definitions of “maximize your expected utility”, and so when C says to E “you should two-box in Newcomb’s problem” he is not just saying “If you two-box in Newcomb’s problem then you will maximize your expected utility according to the CDT formula” since E wouldn’t care about that. So my point is that “you should two-box in Newcomb’s problem” is usually not a “decision-theoretical injunction” in your sense of the phrase, but rather a normative statement as I claimed.
I was assuming implicitly that we were talking in the context of EDT.
In general, you can say “Two-boxing in Newcomb’s problem is the optimal action for you”, where the definition of “optimal action” depends on the decision theory you use.
If you use EDT, then “optimal action” means “maximizes expected utility”, hence the statement above is false (that is, it is inconsistent with the axioms of EDT and Newcomb’s problem).
If you use CDT, then “optimal action” means “maximizes expected utility under a causality assumption”. Hence the statement above is technically true, although not very useful, since the axioms that define Newcomb’s problem specifically violate the causality assumption.
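To make the two readings of “optimal action” concrete, here is a minimal numeric sketch (the 99% predictor accuracy and the 50% prior are illustrative assumptions, not part of the problem statement in this thread): the EDT-style expectation conditions the opaque box’s contents on the agent’s own action, while the CDT-style expectation treats the contents as causally fixed, so two-boxing comes out ahead by exactly the $1,000 in the transparent box no matter what prior you use.

```python
# Minimal sketch of the two expectations, under assumed payoffs and predictor accuracy.
BIG, SMALL = 1_000_000, 1_000   # opaque box (if full) and transparent box
ACCURACY = 0.99                 # assumed P(predictor guessed the agent's action correctly)

def edt_eu(action):
    """EDT-style expectation: condition the opaque box's contents on your own action."""
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_full * BIG + (SMALL if action == "two-box" else 0)

def cdt_eu(action, p_full):
    """CDT-style expectation: the contents are causally fixed; p_full is just your prior."""
    return p_full * BIG + (SMALL if action == "two-box" else 0)

for a in ("one-box", "two-box"):
    print(f"EDT EU({a}) = {edt_eu(a):,.0f}")
for a in ("one-box", "two-box"):
    print(f"CDT EU({a}) = {cdt_eu(a, p_full=0.5):,.0f}")
# EDT ranks one-boxing higher (990,000 vs 11,000); CDT ranks two-boxing higher
# (501,000 vs 500,000), and the 1,000 gap is the same for every value of p_full.
```

The CDT column is “technically true, although not very useful” in exactly the sense above: the fixed-contents assumption is the thing the problem’s setup denies.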
So, which decision theory should you use? An answer like “you should use the decision theory that determines the optimal action without any assumption that violates the problem constraints” seems irreducible to an epistemic statement. But is that actually correct?
If you are studying actual agents, then the point is moot, since these agents already have a decision theory (in practice it will be an approximation of either EDT or CDT, or something else), but what if you want to improve yourself, or build an artificial agent?
Then you evaluate the new decision theory according to the decision theory that you already have. Then, assuming that in principle your current decision theory can be described epistemically, you can say, for instance: “A decision theory that determines the optimal action without any assumption that violates the problem constraints is optimal for me”.
If you want to suggest a decision theory to somebody who is not you, you can say: “A decision theory that determines the optimal action without any assumption that violates the problem constraints is optimal for you”, or, more properly but less politely: “You using a decision theory that determines the optimal action without any assumption that violates the problem constraints are optimal for me”.
Then you evaluate the new decision theory according to the decision theory that you already have.
I had similar thoughts before, but eventually changed my mind. Unfortunately it’s hard to convince people that their solution to some problem isn’t entirely satisfactory without having a better solution at hand. (For example, this post of mine pointing out a problem with using probability theory to deal with indexical uncertainty sat at 0 points for months before I made my UDT post which suggested a different solution.) So instead of trying harder to convince people now, I think I will instead try harder to figure out a better answer by myself (and others who already share my views).
If I say “Alice has a false belief that she should two-box in Newcomb’s problem” it doesn’t seem like I’m saying that her map doesn’t correspond to the territory.
It seems that way to me. Specifically, in that case I think you’re saying that Alice (wrongly) expects that her decision is causally independent from the money Omega put in the boxes, and as such thinks that her expected utility is higher from grabbing both boxes.
I don’t think 2 is answered even if you say that the mathematical objects are themselves real. Consider a geometry that labels “true” everything that follows from its axioms. If this geometry is consistent, then we want to say that it is true, which implies that everything it labels as “true”, is. And the axioms themselves follow from the axioms, so the mathematical system says that they’re true. But you can also have another valid mathematical system, where one of those axioms is negated. This is a problem because it implies that something can be both true and not true.
Because of this, the sense in which mathematical propositions can be true can’t be the same sense in which “snow is white” can be true, even if the objects themselves are real. We have to be equivocating somewhere on “truth”.
It’s easy to overcome that simply by being a bit more precise—you are saying that such and such a proposition is true in geometry X. Meaning that the axioms of geometry X genuinely do imply the proposition. That this proposition may not be true in geometry Y has nothing to do with it.
It is a different sense of true in that it isn’t necessarily related to sensory experience—only to the interrelationships of ideas.
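One way to make this explicit (the parallel-postulate example is mine, not the commenter’s): the same sentence can be a theorem of one axiom system and refuted by another, and the claims that hold without qualification are the metalinguistic ones about what each system proves.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% P = the parallel postulate; Euc = the Euclidean axioms; Hyp = the hyperbolic axioms.
\[ \mathrm{Euc} \vdash P \qquad \text{and} \qquad \mathrm{Hyp} \vdash \neg P \]
% Both systems are consistent (each has a model), so neither $P$ nor $\neg P$ is true
% simpliciter; what is true without qualification is each claim of the form
% ``$\mathrm{Euc} \vdash P$'', i.e. ``$P$ is true in geometry Euc''.
\end{document}
```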
You are tacitly assuming that Platonists have to hold that what is formally true (provable, derivable from axioms) is actually true. But a significant part of the content of Platonism is that mathematical statements are only really true if they correspond to the organisation of Plato’s heaven. Platonists can say, “I know you proved that, but it isn’t actually true”. So there are indeed different notions of truth at play here.
Which is not to defend Platonism. The notion of a “real truth” which can’t be publicly assessed or agreed upon in the way that formal proof can be is quite problematic.
He says that counterfactuals do have a truth value, though IMO he’s a bit vague about what that is (or maybe it’s me who can’t fully understand what he says).