Note that individual value differences (like personal differences in preferences/desires) do not imply a difference in moral priority. This is because moral priority, at least judging from a broadly utilitarian analysis of the term, derives from some kind of aggregate of preferences, not from an individual preference. Questions about moral priority can be reduced to the empirical question of what the individual preferences are, and/or to the conceptual question of what this ethical aggregation method is. People can come (or fail to come) to an agreement on both irrespective of what their preferences are.
Not sure about the italics, but I like showing Earth this way from space. It drives home a sense of scale.
Note: the video doesn’t show up for me.
“X because Y” implies “X and Y”, though not the other way round.
This is an overall well-reasoned post. I don’t want the conclusion to be true, but that is no reason to downvote.
I noticed this years ago when the variations of the show Big Brother were on TV in various countries. The show consists of compilations of real people spontaneously talking to each other throughout the day. The difference between this and the scripted conversations we saw on TV before Big Brother is huge. Real people apparently hardly talk in complete sentences, which is why scripted conversations are immediately recognizable as being fake. It’s also strange that this is hardly noticeable in real life when you are actually having conversations.
I think one issue with the “person + time” context is that we may assume that once I know the time, I must know whether it is Friday or not. A more accurate assessment would be to say that an indexical proposition corresponds to a set of possible worlds together with a person moment, i.e. a complete mental state. The person moment replaces the “person + time” context. This makes it clear that “It’s Friday” is true in some possible worlds and false in others, depending on whether my person moment (my current mental state, including all the evidence I have from perception etc) is spatio-temporally located at a Friday in that possible world. This also makes intuitive sense, since I know my current mental state, but that alone is not necessarily sufficient to determine the day of the week, and I could be mistaken about whether it’s Friday or not.
A different case is “I am here now” or the classic “I exist”, which would be true for any person moment and any possible world where that person moment exists. These are “synthetic a priori” propositions. Their truth can be ascertained from introspection alone (“a priori”), but they are “synthetic” rather than “analytic”, since they aren’t true in every possible world, i.e. not in worlds where the person moment doesn’t exist. At least “I exist” is false in worlds where the associated person moment doesn’t exist, and arguably so is “I am here now”.
Yet another variation would be “I’m hungry”, “I have a headache”, “I have the visual impression of a rose”, “I’m thinking about X”. These only state something about aspects of an internal state, so their truth value only depends on the person moment, not on what the world is like apart from it. So a proposition of this sort is either true in all possible worlds where that person moment exists, or false in all possible worlds where that person moment exists (depending on whether the sensation of hungriness etc is part of the person moment or not). Though I’m not sure which truth value they should be assigned in possible worlds where the person moment doesn’t exist. If “I’m thinking of a rose” is false when I don’t exist, is “I’m not thinking of a rose” also false when I don’t exist? Both presuppose that I exist. To avoid contradictions, this would apparently require a three-valued logic, with a third truth value for propositions like that in case the associated person moment doesn’t exist.
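To illustrate, here is a minimal sketch of this three-valued picture in Python. The `World` class, the person-moment label `"m"`, and the use of `None` as the third truth value are all my own illustrative choices, not anything standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class World:
    moments: frozenset         # person moments that exist in this world
    friday_moments: frozenset  # moments spatio-temporally located on a Friday

def its_friday(world: World, moment: str) -> Optional[bool]:
    """Truth value of "It's Friday", relative to a person moment.

    Returns None (the third truth value) when the person moment doesn't
    exist in the world, so that neither the proposition nor its negation
    comes out true there.
    """
    if moment not in world.moments:
        return None
    return moment in world.friday_moments

# The same person moment (mental state) can be located on a Friday in one
# world, on another day in a second, and not exist at all in a third:
w1 = World(frozenset({"m"}), frozenset({"m"}))
w2 = World(frozenset({"m"}), frozenset())
w3 = World(frozenset(), frozenset())
print(its_friday(w1, "m"), its_friday(w2, "m"), its_friday(w3, "m"))
# True False None
```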
And what would this look like? Can you reframe the original argument accordingly?
I meant leogao’s argument above.
It seems the updating rule doesn’t tell you anything about the original argument even when you view information about reference classes as evidence rather than as a method of assigning prior probabilities to hypotheses. Or does it? Can you rephrase the argument in a proper Bayesian way such that it becomes clearer? Note that how strongly some evidence confirms or disconfirms a hypothesis also depends on a prior.
a prior should be over all valid explanations of the prior evidence.
… but that still leaves the problem of which prior distribution should be used.
It seems you have in mind something like inference to the best explanation here. Bayesian updating, on the other hand, does need a prior distribution, and the question of which prior distribution to use cannot be waved away when there is a disagreement on how to update. In fact, that’s one of the main problems of Bayesian updating, and the reason why it is often not used in arguments.
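To make that last point concrete, here is a small sketch with hypothetical numbers: the same evidence (a 4:1 likelihood ratio in favor of H) pushed through Bayes’ theorem yields very different posteriors under two different priors.

```python
def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H|E) by Bayes' theorem."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Same evidence, different priors:
print(posterior(0.50, 0.8, 0.2))  # 0.8    -- the evidence looks decisive
print(posterior(0.01, 0.8, 0.2))  # ~0.039 -- H remains very unlikely
```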
Or DeepSeek-V3-Base.
Something I noticed recently is that there are two types of preference, and that confusing them leads to some of the paradoxes described here.
Type 1: desires as preferences. A desire (wanting something) can loosely be understood as wanting something which you currently don’t have. More precisely, a desire for X to be true is preferring a higher (and indeed maximal) subjective probability for X to your current actual probability for X. “Not having X currently” above just means being currently less than certain that X is true. Wanting something is wanting it to be true, and wanting something to be true is wanting to be more certain (including perfectly certain) that it is true. Moreover, desires come in strengths. The strength of your desire for X corresponds to how strongly you prefer perfect certainty that X is true to your current degree of belief that X is true. These strengths can be described by numbers in a utility function. In specific decision theories, preferences/desires of this sort are simply called “utilities”, not “preferences”.
Type 2: preferences that compare desires. Since desires can have varying strengths, the desire for X can be stronger (“have higher utility”) than the desire for Y. In that sense you may prefer X to Y, even if you currently “have” neither X nor Y, i.e. even if you are less than certain that either is true. Moreover, you may both desire X and desire Y, but preferring X to Y is not itself a desire.
Preferences of type 1 are what the arrows express in your graphs (even though you interpret the nodes more narrowly as states, not broadly as propositions which could be true). An arrow $A \to B$ means that if $A$ is the current state, you want to be in $B$. More technically, you could say that in state $A$ you disbelieve that you are in state $B$, and the arrow means you want to come to believe you are in state $B$. Moreover, the desires inside a money pump argument are also preferences of type 1: they are about things which you currently don’t have but prefer to have.
What about preferences of type 2? Those are the things which standard decision theories call “preferences” and describe with a symbol like “$\succeq$”, e.g. in Savage’s or Jeffrey’s theory.
Now one main problem is that people typically use the money pump argument, which talks about preferences of type 1 (desires/utilities), in order to draw conclusions about preferences of type 2 (comparisons between two desires/utilities), without noticing that they are different types. So in this form, the argument is clearly confused.
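A toy sketch of the two types in Python; the numbers and names here are purely illustrative:

```python
# Type 1: a desire attaches a utility to a single proposition; wanting X
# means its utility exceeds that of your status quo.
utility = {"X": 10.0, "Y": 7.0, "status quo": 0.0}

def desires(p: str) -> bool:
    """Type-1 preference: a desire for one proposition."""
    return utility[p] > utility["status quo"]

def prefers(a: str, b: str) -> bool:
    """Type-2 preference: a comparison between two desires."""
    return utility[a] > utility[b]

print(desires("X"), desires("Y"))  # True True -- both are desired (type 1)
print(prefers("X", "Y"))           # True -- X is preferred to Y (type 2),
                                   # which is not itself a desire
```

The point is just that these are different kinds of objects: a desire is a property of one proposition relative to the status quo, while a type-2 preference is a relation between two propositions, so conclusions about one don’t automatically transfer to the other.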
I mean, I agree: indexicals don’t really work with interpreting propositions simply as sets of possible worlds, but both sentences contain such indexicals, like “I”, implicitly or explicitly. “I” only makes sense for a specific person at a specific time. “It’s Friday (for me)”, relative to a person and a time, fixes a set of possible worlds where the statement is true. The same goes for “I try to make sure to check the mail on Fridays”.
Where do you think the difference lies? I agree that there is a problem with indexical content, though it affects both examples. (“It’s (currently) Friday (where I live).”)
Even though it doesn’t solve all problems with indexicals, it’s probably better to not start with possible worlds but instead start with propositions directly, similar to propositional logic. Indeed this is what Richard Jeffrey does. Instead of starting with a set of possible worlds, he starts with a disjunction of two mutually exclusive propositions $X$ and $Y$: $$des(X \vee Y) = \frac{prob(X)\,des(X) + prob(Y)\,des(Y)}{prob(X) + prob(Y)}$$
If we wanted to be super proper, then preferences should have as objects maximally specific ways the world could be, including the whole history and future of the universe, down to the last detail. Decision theory involving anything more coarse-grained than that is just a useful approximation.
Preferences can be equally rigorously defined over events if probabilities and utilities are also available. Call a possible world $w$, the set of all possible worlds $W$, and a set $E$ such that $E \subseteq W$ an “event”. Then the utility of $E$ is plausibly $$U(E) = \frac{\sum_{w \in E} P(w)\,U(w)}{P(E)}.$$ This is a probability-weighted average, which derives from dividing the expected utility $\sum_{w \in E} P(w)\,U(w)$ by $P(E)$, to arrive at the formula for $E$ alone.
So if we have both a probability function and a utility function over possible worlds, we can also fix a Boolean algebra of events over which those functions are defined. Then a “preference” between two events $A$ and $B$ is simply $U(A) > U(B)$.
“Events” are a lot more practical than possible worlds, since events don’t have to be maximally specific, and they correspond directly to propositions, which one can “believe” and “desire” to be true. Degrees of belief and degrees of desire can be described by probability and utility functions respectively.
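Here is a minimal sketch of this construction, with made-up worlds, probabilities, and utilities:

```python
# A probability function P and a utility function U over possible worlds:
P = {"w1": 0.2, "w2": 0.3, "w3": 0.5}
U = {"w1": 10.0, "w2": 0.0, "w3": 4.0}

def utility_of_event(event: set) -> float:
    """U(E): the probability-weighted average of U over the worlds in E,
    i.e. the expected utility within E divided by P(E)."""
    p_e = sum(P[w] for w in event)
    return sum(P[w] * U[w] for w in event) / p_e

A = {"w1", "w2"}  # events needn't be maximally specific
B = {"w3"}
print(utility_of_event(A))  # (0.2*10 + 0.3*0) / 0.5 = 4.0
print(utility_of_event(B))  # 4.0
print(utility_of_event(A) > utility_of_event(B))  # False: no strict preference
```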
Maybe the want comes from the cortex, while the urge comes from the cerebellum. Or the want comes from the superego, while the urge comes from the id. Though I agree that at other times it doesn’t feel necessary to talk of urges. I distinguished two different explanations here. One with urges vs wants, one with “want” vs “want to want”. Though you already touched on the latter.
That’s true. Another theory is that our tolerance for “small pieces of highly engaging information” increases the more we consume, so we need a higher dosage, and if we abstain for a while, the tolerance goes down again (the sensitivity increases), and we no longer need as much. Similar to how you “need” less sugar for food to taste appropriately sweet if you have abstained from sugar for a while.
Yeah. It is probably even more important for the cover to look serious and “academically respectable” than for it to look maximally appealing to a broad audience. It shouldn’t give the impression of a science fiction novel or a sensationalist crackpot theory. An even more negative example of this kind (in my opinion) is the American cover of The Beginning of Infinity by David Deutsch.