This approach implies there are two possible types of meanings: Sets of possible worlds and sets of possible experiences. A set of possible worlds would constitute truth conditions for “objective” statements about the external world, while a set of experience conditions would constitute verification conditions for subjective statements, i.e. statements about the current internal states of the agent.
However, it seems like a statement can mix both external and internal affairs, which would make the 0P/1P distinction problematic. Consider Wei Dai’s example of “I will see red”. It expresses a relation between the current agent (“I”) and its hypothetical “future self”. “I” is presumably an internal object, since the agent can refer to itself or its experiences independently of how the external world turns out to be constituted. The future agent, however, is an external object relative to the current agent who makes the statement. It must be external because its existence is uncertain to the present agent. The same goes for the future experience of red.
Then the statement “I will see red” could be formalized as follows, where i (“I”/”me”/”myself”) is an individual constant which refers to the present agent:
∃x(WillBecome(i,x)∧∃y(Red(y)∧Sees(x,y))).
Somewhat less formally: “There is an x such that I will become x and there is an experience of red y such that x sees y.”
(The quantifier is assumed to range over all objects irrespective of when they exist in time.)
If there is a future object x and a future experience y that make this statement true, they would be external to the present agent who is making the statement. But i is internal to the present agent, as it is the (present) agent itself. (Consider Descartes’ demon currently misleading you about the existence of the external world. Even in that case you could be certain that you exist. So you aren’t something external.)
So Wei’s statement seems partially internal and partially external, and it is not clear whether its meaning can be either a set of experiences or a set of possible worlds on the 0P/1P theory. So it seems a unified account is needed.
Here is an alternative theory.
Assume the meaning of a statement is instead a set of experience/degree-of-confirmation pairs. That is, two statements have the same meaning if they get confirmed/disconfirmed to the same degree for all possible experiences E. So a statement A has the same meaning as a statement B iff:
∀E(P(A∣E)=P(B∣E)),
where P(_∣_) is a probability function describing conditional beliefs. (See Yudkowsky’s anticipated experiences. Or Rudolf Carnap’s liberal verificationism, which considers degrees of confirmation instead of Wittgenstein’s strict verification.)
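To make the condition a bit more concrete, here is a minimal sketch in Python. It is my own illustration, not part of the theory: the statements, experiences, function name and probability values are all made up, and the set of possible experiences is artificially finite. It only shows what checking ∀E(P(A∣E)=P(B∣E)) amounts to.

```python
# Toy illustration of the synonymity condition. All names and numbers are made up;
# only the comparison P(A|E) = P(B|E) for every experience E matters.

experiences = ["see raindrops on window", "hear drumming on roof", "see clear blue sky"]

# Conditional beliefs P(statement | experience), indexed by (statement, experience).
cond_prob = {
    ("it is raining", "see raindrops on window"): 0.95,
    ("it is raining", "hear drumming on roof"): 0.85,
    ("it is raining", "see clear blue sky"): 0.05,
    ("water is falling from the sky", "see raindrops on window"): 0.95,
    ("water is falling from the sky", "hear drumming on roof"): 0.85,
    ("water is falling from the sky", "see clear blue sky"): 0.05,
}

def synonymous(a, b):
    """A and B have the same meaning iff P(A|E) = P(B|E) for every possible experience E."""
    return all(cond_prob[(a, e)] == cond_prob[(b, e)] for e in experiences)

print(synonymous("it is raining", "water is falling from the sky"))  # True on this toy table
```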
Now this arguably makes sense for statements about external affairs: If I make two statements and would regard them as confirmed or disconfirmed to the same degree by the same evidence, that would plausibly mean I regard them as synonymous. And if two people disagree about the confirmation conditions of a statement, that would imply they don’t mean the same (or completely the same) thing when they express that statement, even if they use the same words.
It also makes sense for internal affairs. I make a statement about some internal affair, like “I see red”, formally See(i,r)∧Red(r). Here i refers to myself and r to my current experience of red. Then this is true iff there is some piece of evidence E which is equivalent to that internal statement, namely the experience that I see red. Then P(See(i,r)∧Red(r)∣E)=1 if E=See(i,r)∧Red(r), otherwise P(See(i,r)∧Red(r)∣E)=0.
Again, the “I” here is logically an individual constant i internal to the agent, likewise the experience r. That is, only my own experience verifies that statement. If there is another agent, who also sees red, those experiences are numerically different. There are two different constants i which refer to numerically different agents, and two constants r which refer to two different experiences.
That is even the case if the two agents are perfectly correlated, qualitatively identical doppelgangers with qualitatively identical experiences (on, say, some duplicate versions of Earth, far away from each other). If one agent stubs its toe, the other agent also stubs its toe, but the first agent only feels the pain caused by the first agent’s toe, while the second only feels the pain caused by the second agent’s toe, and neither feels the experience of the other. Their experiences are only qualitatively but not numerically identical. We are talking about two experiences here, as one could have occurred without the other. They are only contingently correlated.
Now then, what about the mixed case “I will see red”? We need an analysis here such that the confirming evidence is different for statements expressed by two different agents who both say “I will see red”. My statement ∃x(WillBecome(i,x)∧∃y(Red(y)∧Sees(x,y))) would be (now) confirmed, to some degree, by any evidence (experiences) suggesting that a) I will become some future person x such that b) that future person will see red. That is different from the internal “I see red” experience that this future person would have themselves.
An example. I may see a notice indicating that a red umbrella I ordered will arrive later today, which would confirm that I will see red. Seeing this notice would constitute such a confirming experience. Again, my perfect doppelganger on a perfect twin Earth would also see such a notice, but our experiences would not be numerically identical. Just like my doppelganger wouldn’t feel my pain when we both, synchronously, stub our toes. My experience of seeing the umbrella notice is caused (explained) by the umbrella notice here on Earth, not by the far away umbrella notice on twin Earth. When I say “this notice” I refer to the hypothetical object which causes my experience of a notice. So every instance of the indexical “this” involves reference to myself and to an experience I have. Both are internal, and thus numerically different even for agents with qualitatively identical experiences. So if we both say “This notice says I will see a red umbrella later today”, we would express different statements. Their meaning would be different.
In summary, I think this is a good alternative to the 0P/1P theory. It provides a unified account of meanings, and it correctly deals with distinct agents using indexicals while having qualitatively identical experiences. Because it has a unified account of meaning, it has no in-principle problem with “mixed” (internal/external) statements.
It does omit possible worlds. So one objection would be that it assigns the same meaning to two hypotheses which make distinct but (in principle) unverifiable predictions. Like, perhaps, two different interpretations of quantum mechanics. I would say that a) these theories may differ in other aspects which are subject to some possible degree of (dis)confirmation, and b) if even such indirect empirical comparisons are excluded a priori, regarding them as synonymous doesn’t sound so bad.
The problem with using possible worlds to determine meanings is that you can always claim that the meaning of “The mome raths outgrabe” is the set of possible worlds where the mome raths outgrabe. Since possible worlds (unlike anticipated degrees of confirmation by different possible experiences) are objects external to an agent, there is no possibility of a decision procedure which determines that an expression is meaningless. Nor can there, with the possible worlds theory, be a decision procedure which determines that two expressions have the same or different meanings. It only says the meaning of “Bob is a bachelor” is determined by the possible worlds where Bob is a bachelor, and that the meaning of “Bob is an unmarried man” is determined by the worlds where Bob is an unmarried man, but it doesn’t say anything which would allow an agent to compare those meanings.
Assume the meaning of a statement is instead a set of experience/degree-of-confirmation pairs. That is, two statements have the same meaning if they get confirmed/disconfirmed to the same degree for all possible experiences E.
Where do these degrees-of-confirmation come from? I think part of the motivation for defining meaning in terms of possible worlds is that it allows us to compute conditional and unconditional probabilities, e.g., P(A|B) = P(A and B)/P(B) where P(B) is defined in terms of the set of possible worlds that B “means”. But with your proposed semantics, we can’t do that, so I don’t know where these probabilities are supposed to come from.
You can interpret them as subjective probability functions, where the conditional probability P(A|B) is the probability you currently expect for A under the assumption that you are certain that B. With the restriction that P(A and B)=P(A|B)P(B)=P(A)P(B|A).
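Purely as an illustration (my own, with made-up numbers, and not something the possible-worlds question depends on): the restriction is just a coherence condition on the degrees of belief themselves, and it can be checked directly when the conditional probabilities are taken as primitive.

```python
# Made-up subjective degrees of belief, with the conditional probabilities taken as primitive.
p_b = 0.5          # P(B)
p_a_given_b = 0.4  # P(A|B): probability I currently expect for A, assuming certainty about B
p_a = 0.5          # P(A)
p_b_given_a = 0.4  # P(B|A)

# The restriction from above: P(A and B) = P(A|B)P(B) = P(A)P(B|A).
p_ab_1 = p_a_given_b * p_b
p_ab_2 = p_b_given_a * p_a
assert abs(p_ab_1 - p_ab_2) < 1e-12  # this assignment of beliefs is coherent
print(p_ab_1)                        # 0.2
```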
I don’t think possible worlds help us to calculate either of the two values in the ratio P(A and B)/P(B). That would only be possible if you could say something about the share of possible worlds in which “A and B” is true, or “B”.
Like: “A and B” is true in 20% of all possible worlds, “B” is true in 50%, therefore “A” is true in 40% of the “B” worlds. So P(A|B)=0.4.
But that obviously doesn’t work. Each statement is true in infinitely many possible worlds and we have no idea how to count them to assign numbers like 20%.
You can interpret them as subjective probability functions, where the conditional probability P(A|B) is the probability you currently expect for A under the assumption that you are certain that B.
Where do they come from or how are they computed? However that’s done, shouldn’t the meaning or semantics of A and B play some role in that? In other words, how do you think about P(A|B) without first knowing what A and B mean (in some non-circular sense)? I think this suggests that “the meaning of a statement is instead a set of experience/degree-of-confirmation pairs” can’t be right.
See What Are Probabilities, Anyway? for some ideas.
Each statement is true in infinitely many possible worlds and we have no idea how to count them to assign numbers like 20%.
Yeah, this is a good point. The meaning of a statement is explained by experiences E, so the statement can’t be assumed from the outset to be a proposition (the meaning of a statement), as that would be circular. We have to assume that it is a potential utterance, something like a probabilistic disposition to assent to it. The synonymity condition can be clarified by writing the statements in quotation marks:
∀E(P("A"∣E)=P("B"∣E)).
Additionally, the quantifier ranges only over experiences E, which are not arbitrary statements but potential experiences of the agent. Experiences are certain once you have them, while ordinary beliefs about external affairs are not.
By the way, the above is the synonymity condition which defines when two statements are synonymous. A somewhat awkward way to define the meaning of an individual statement would be as the equivalence class of all synonymous statements. A more direct possibility would be to regard the meaning as the set of all pairwise odds ratios between the statement and any possible piece of evidence. The odds ratio measures the degree of probabilistic dependence between two events, which accords with the Bayesian idea that evidence is basically just dependence.
Then one could alternatively define synonymity as equality of the meanings of two statements, i.e. of their odds-ratio sets. The above definition of synonymity would then no longer be required. This would have the advantage that we don’t have to assign a mysterious unconditional value P(“A”) in the case where A and E are independent and P(“A”|E)=P(“A”); independence just means OddsRatio(“A”,E)=1.
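Here is a minimal sketch of this odds-ratio picture (again my own illustration: the function name, the statements, the evidence items and all numbers are invented). The “meaning” of a statement is collected as its odds ratio against each possible piece of evidence, independence shows up as an odds ratio of 1, and the measure is symmetric in the two events.

```python
def odds_ratio(p_xy, p_x, p_y):
    """Odds ratio between events X and Y, computed from P(X and Y), P(X) and P(Y)."""
    p_x_not_y = p_x - p_xy
    p_not_x_y = p_y - p_xy
    p_not_x_not_y = 1 - p_x - p_y + p_xy
    return (p_xy * p_not_x_not_y) / (p_x_not_y * p_not_x_y)

# Toy beliefs about a statement "A" and two possible experiences E1, E2.
p_A = 0.4
evidence = {
    "E1": (0.3, 0.2),  # (P(E1), P(A and E1)): positively dependent on A
    "E2": (0.5, 0.2),  # P(A and E2) = P(A)P(E2): independent of A
}

# The "meaning" of A as its set of pairwise odds ratios with all possible evidence.
meaning_of_A = {e: round(odds_ratio(p_ae, p_A, p_e), 6) for e, (p_e, p_ae) in evidence.items()}
print(meaning_of_A)  # {'E1': 5.0, 'E2': 1.0} -- independence shows up as an odds ratio of 1

# Probabilistic dependence is symmetric: OddsRatio(A, E1) = OddsRatio(E1, A).
print(abs(odds_ratio(0.2, p_A, 0.3) - odds_ratio(0.2, 0.3, p_A)) < 1e-9)  # True
```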
Another interesting thing to note is that Yudkowsky sometimes seems to express his theory of “anticipated experiences” in the reverse of what I’ve done above. He seems to think of prediction instead of confirmation. That would reverse things:
∀E(P(E∣"A")=P(E∣"B")).
I don’t think it makes much of a difference, since probabilistic dependence is ultimately symmetric, i.e. OddsRatio(X,Y)=OddsRatio(Y,X).
Maybe there is some other reason, though, to prefer the prediction approach over the confirmation approach. For example, for independence we would have P(E|”A”)=P(E) instead of P(“A”|E)=P(“A”). The former relies on the unconditional probability of an experience, which may be less problematic than relying on the unconditional probability of a statement.
And how does someone compute the degree to which they expect some experience to confirm a statement? I leave that outside the theory. The theory only says that what you mean with a statement is determined by what you expect to confirm or disconfirm it. I think that has a lot of plausibility once you think about synonymity: How could we say two statements have different meanings when we regard them as empirically equivalent under any possible evidence?
The approach can be generalized to account for the meaning of sub-sentence terms, i.e. individual words. A standard solution is to say that two words are synonymous iff they can be substituted for each other in any statement without affecting the meaning of the whole statement. There are also tautologies, which are independent of any evidence, so under the statement-level criterion above they would all count as synonymous. I think we could say their meanings differ in the sense that the meanings of the individual words differ. For other sentence types, like commands, we could e.g. rely on evidence that the command has been executed, instead of evidence that it is true, as with statements. An open problem is to account for the meaning of expressions that don’t have any obvious satisfaction conditions (like being true or executed), e.g. greetings.
Regarding “What Are Probabilities, Anyway?”. The problem you discuss there is how to define an objective notion of probability. Subjective probabilities are simple, they are just the degrees of belief of some agent at a particular point in time. But it is plausible that some subjective probability distributions are better than others, which suggests there is some objective, ideally rational probability distribution. It is unclear how to define such a thing, so this remains an open philosophical problem. But I think a theory of meaning works reasonably well with subjective probability.
Suppose I tell a stranger, “It’s raining.” Under possible worlds semantics, this seems pretty straightforward: I and the stranger share a similar map from sentences to sets of possible worlds, so with this sentence I’m trying to point them to a certain set of possible worlds that match the sentence, and telling them that I think the real world is in this set.
Can you tell a similar story of what I’m trying to do when I say something like this, under your proposed semantics?
And how does someone compute the degree to which they expect some experience to confirm a statement? I leave that outside the theory.
I don’t think we should judge philosophical ideas in isolation, without considering what other ideas they’re compatible with and how well they fit into them. So I think we should try to answer related questions like this, and look at the overall picture, instead of just saying “it’s outside the theory”.
Regarding “What Are Probabilities, Anyway?”. The problem you discuss there is how to define an objective notion of probability.
No, in that post I also consider interpretations of probability where it’s subjective. I linked to that post mainly to show you some ideas for how to quantify sizes of sets of possible worlds, in response to your assertion that we don’t have any ideas for this. Maybe try re-reading it with this in mind?
Suppose I tell a stranger, “It’s raining.” Under possible worlds semantics, this seems pretty straightforward: I and the stranger share a similar map from sentences to sets of possible worlds, so with this sentence I’m trying to point them to a certain set of possible worlds that match the sentence, and telling them that I think the real world is in this set.
Can you tell a similar story of what I’m trying to do when I say something like this, under your proposed semantics?
So my conjecture of what happens here is: You and the stranger assume a similar degree-of-confirmation relation between the sentence “It’s raining” and possible experiences. For example, you both expect visual experiences of raindrops, when looking out of the window, to confirm the sentence pretty strongly. Or rain-like sounds on the roof. So by asserting this sentence you try to tell the stranger that you predict/expect certain forms of experiences, which presumably makes the stranger predict similar things (if they assume you are honest and well-informed).
The problem with agents mapping a sentence to certain possible worlds is that this mapping has to occur “in our head”, internally to the agent. But possible worlds / truth conditions are external, at least for sentences about the external world. We can only create a mapping between things we have access to. So it seems we cannot create such a mapping. It’s basically the same thing Nate Showell said in a neighboring comment.
(We could replace possible worlds / truth conditions themselves with other beliefs, presumably a disjunction of beliefs that are more specific than the original statement. Beliefs are internal, so a mapping is possible. But beliefs have content (i.e. meaning) themselves, just like statements. So how then to account for these meanings? To explain them with more beliefs would lead to an infinite regress. It all has to bottom out in experiences, which is something we simply have as a given. Or really any robot with sensory inputs, as Adele Lopez remarked.)
No, in that post I also consider interpretations of probability where it’s subjective. I linked to that post mainly to show you some ideas for how to quantify sizes of sets of possible worlds, in response to your assertion that we don’t have any ideas for this. Maybe try re-reading it with this in mind?
Okay, I admit I have a hard time understanding the post. To comment on the “mainstream view”:
“1. Only one possible world is real, and probabilities represent beliefs about which one is real.”
(While I wouldn’t personally call this a way of “estimating the size” of sets of possible worlds,) I think this interpretation has some plausibility. And I guess it may be broadly compatible with the confirmation/prediction theory of meaning. This is speculative, but truth seems to be the “limit” of confirmation or prediction, something that is approached, in some sense, as the evidence gets stronger. And truth is about what the external world is like. Which is just a way of saying that there is some possible way the world is, which rules out other possible worlds.
Your counterargument against interpretation 1 seems to be that it is merely subjective and not objective, which is true. Though this doesn’t rule out the existence of some unknown rationality standards which restrict the admissible beliefs to something more objective.
Interpretation 2, I would argue, is confusing possibilities with indexicals. These are really different. A possible world is not a location in a large multiverse world. Me in a different possible world is still me, at least if not too dissimilar, but a doppelganger of me in this world is someone else, even if he is perfectly similar to me. (It seems trivially true to say that I could have had different desires, and consequently something else for dinner. If this is true, it is possible that I could have wanted something else for dinner. Which is another way of saying there is a possible world where I had a different preference for food. So this person in that possible world is me. But to say there are certain possible worlds is just a metaphysical-sounding way of saying that certain things are possible. Different counterfactual statements could be true of me, but I can’t exist at different locations. So indexical location is different from possible existence.)
I don’t quite understand interpretation 3. But interpretation 4 I understand even less. Beliefs are clearly different from desires. The desire that p is different from the belief that p. They can even be seen as opposites in terms of direction of fit. I don’t understand what you find plausible about this theory, but I also don’t know much about UDT.