Suppose I tell a stranger, "It's raining." Under possible worlds semantics, this seems pretty straightforward: the stranger and I share a similar map from sentences to sets of possible worlds, so with this sentence I'm trying to point them to a certain set of possible worlds that matches the sentence, and to tell them that I think the real world is in this set.
Can you tell a similar story of what I’m trying to do when I say something like this, under your proposed semantics?
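(To make the possible-worlds picture above concrete, here is a minimal toy sketch in Python; the handful of worlds and the way sentences are encoded are purely illustrative assumptions, not part of any actual semantics:)

```python
# Toy possible-worlds semantics: the meaning of a sentence is the set of
# worlds in which it is true, and an assertion claims that the actual
# world lies in that set. Worlds are modeled as simple fact-dictionaries.

worlds = [
    {"raining": True,  "windy": True},
    {"raining": True,  "windy": False},
    {"raining": False, "windy": True},
    {"raining": False, "windy": False},
]

def meaning(sentence):
    """Map a sentence to the set of world indices where it holds."""
    return {i for i, w in enumerate(worlds) if w.get(sentence, False)}

def assertion_true(sentence, actual_world):
    """An assertion is true iff the actual world is in the sentence's meaning."""
    return actual_world in meaning(sentence)

print(meaning("raining"))            # {0, 1}
print(assertion_true("raining", 1))  # True
```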
My conjecture about what happens here: you and the stranger assume a similar degree-of-confirmation relation between the sentence "It's raining" and possible experiences. For example, you both expect visual experiences of raindrops, when looking out of the window, to confirm the sentence pretty strongly, and likewise rain-like sounds on the roof. So by asserting this sentence you try to tell the stranger that you predict/expect certain kinds of experiences, which presumably makes the stranger predict similar things (if they assume you are honest and well-informed).
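(A minimal sketch of what such a shared degree-of-confirmation relation might look like; the sentences, experiences, and numbers below are made-up placeholders, not a worked-out theory:)

```python
# Toy degree-of-confirmation relation: each (sentence, experience) pair
# gets a degree to which the experience would confirm the sentence.
# Hearing an honest assertion then raises the hearer's expectation of
# the strongly confirming experiences.

confirmation = {
    ("It's raining", "seeing raindrops through the window"): 0.95,
    ("It's raining", "hearing rain-like sounds on the roof"): 0.85,
    ("It's raining", "seeing a dry, sunny street"):           0.02,
}

def expected_experiences(sentence, threshold=0.5):
    """Experiences an assertion of `sentence` leads the hearer to expect."""
    return [exp for (s, exp), degree in confirmation.items()
            if s == sentence and degree >= threshold]

print(expected_experiences("It's raining"))
# ['seeing raindrops through the window', 'hearing rain-like sounds on the roof']
```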
The problem with agents mapping a sentence to certain possible worlds is that this mapping has to occur "in the head", internally to the agent. But possible worlds / truth conditions are external, at least for sentences about the external world. We can only create a mapping between things we have access to, and we have no direct access to external truth conditions. So it seems we cannot create such a mapping. It's basically the same point Nate Showell made in a neighboring comment.
(We could replace the possible worlds / truth conditions themselves with other beliefs, presumably a disjunction of beliefs that are more specific than the original statement. Beliefs are internal, so a mapping is possible. But beliefs have content (i.e. meaning) themselves, just like statements. So how then to account for these meanings? Explaining them with more beliefs would lead to an infinite regress. It all has to bottom out in experiences, which are something we simply have as a given. The same goes for any robot with sensory inputs, as Adele Lopez remarked.)
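(The regress and its base case can be pictured schematically: a content is either an experience, which is simply given, or is spelled out by further contentful beliefs, so unfolding meanings can only terminate at experiences. A toy sketch, with made-up structures:)

```python
# Illustrative recursion: a belief's content is spelled out by more
# specific beliefs until it bottoms out in experiences (the base case).

from dataclasses import dataclass

@dataclass
class Experience:
    description: str  # simply given; not explained by further content

@dataclass
class Belief:
    parts: list  # more specific beliefs, or experiences at the bottom

def ground(content):
    """Collect the experiences in which a content ultimately bottoms out."""
    if isinstance(content, Experience):
        return [content.description]
    return [e for part in content.parts for e in ground(part)]

rain = Belief([Belief([Experience("raindrops seen on the window")]),
               Experience("rain-like sounds heard on the roof")])
print(ground(rain))
```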
No, in that post I also consider interpretations of probability where it’s subjective. I linked to that post mainly to show you some ideas for how to quantify sizes of sets of possible worlds, in response to your assertion that we don’t have any ideas for this. Maybe try re-reading it with this in mind?
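(For instance, one simple idea from that family: treat a probability measure over worlds as the "size" of a set of worlds. A toy sketch with made-up worlds and weights:)

```python
# Toy sketch: 'sizing' a set of possible worlds with a probability
# measure. The four worlds and their weights are illustrative only.

weights = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}  # sums to 1.0

def size(world_set):
    """Measure of a set of worlds under the assumed weights."""
    return sum(weights[w] for w in world_set)

print(size({"w1", "w2"}))  # 0.7, the 'size' of a two-world proposition
```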
Okay, I admit I have a hard time understanding the post. To comment on the “mainstream view”:
“1. Only one possible world is real, and probabilities represent beliefs about which one is real.”
While I wouldn't personally call this a way of "estimating the size" of sets of possible worlds, I think this interpretation has some plausibility. And I guess it may be broadly compatible with the confirmation/prediction theory of meaning. This is speculative, but truth seems to be the "limit" of confirmation or prediction, something that is approached, in some sense, as the evidence gets stronger. And truth is about what the external world is like, which is just a way of saying that there is some possible way the world is, which rules out the other possible worlds.
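(The "limit" idea can be made concrete with a standard Bayesian toy calculation; the prior and likelihoods below are assumptions chosen for illustration: each confirming experience pushes the probability of "It's raining" closer to 1.)

```python
# Toy Bayesian updating: repeated confirming evidence drives the
# posterior toward 1, so truth can be pictured as the limit of
# confirmation. The prior and likelihoods are illustrative assumptions.

prior = 0.5
p_obs_given_rain = 0.9   # P(confirming experience | raining)
p_obs_given_dry  = 0.1   # P(confirming experience | not raining)

p = prior
for n in range(1, 6):
    # Bayes' theorem after one more confirming experience
    p = (p_obs_given_rain * p) / (p_obs_given_rain * p
                                  + p_obs_given_dry * (1 - p))
    print(f"after {n} confirming experiences: p = {p:.4f}")
# p tends to 1 as confirming experiences accumulate
```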
Your counterargument against interpretation 1 seems to be that it is merely subjective and not objective, which is true. Though this doesn't rule out the existence of some as-yet-unknown rationality standards that restrict the admissible beliefs to something more objective.
Interpretation 2, I would argue, confuses possibilities with indexicals. These are really different: a possible world is not a location in a large multiverse world. Me in a different possible world is still me, at least if not too dissimilar, but a doppelganger of me in this world is someone else, even if he is perfectly similar to me. (It seems trivially true to say that I could have had different desires, and consequently eaten something else for dinner. If this is true, it is possible that I wanted something else for dinner, which is another way of saying there is a possible world where I had a different preference for food. So the person in that possible world is me. But to say that certain possible worlds exist is just a metaphysical-sounding way of saying that certain things are possible. Different counterfactual statements could be true of me, but I can't exist at different locations. So indexical location is different from possible existence.)
I don't quite understand interpretation 3, and interpretation 4 I understand even less. Beliefs seem clearly different from desires: the desire that p is different from the belief that p, and the two can even be seen as opposites in terms of direction of fit. I don't understand what you find plausible about this theory, but I also don't know much about UDT.