Well, the version of UDT I’m using doesn’t have probabilities, only a utility function over combined outcomes. It’s just a simpler way to think about things. I think you and Scott might be overestimating the usefulness of probabilities. For example, in the Sleeping Beauty problem, the coinflip is “spacelike separated” from you (under Scott’s peculiar definition), but it can be assigned different “probabilities” depending on your utility function over combined outcomes.
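To make that concrete, here's a minimal toy sketch (my own illustration with made-up stakes, not anything from Scott's post): an updateless Beauty commits in advance to accepting, on every awakening, a bet that pays +1 if the coin landed tails and -x if heads. The stake x at which she's indifferent, read back as betting odds P(tails) = x/(1+x), depends only on how her utility function aggregates payoffs across the combined outcome:

```python
# Toy model (my own, not Scott's): Beauty pre-commits to accepting, on each
# awakening, a bet paying +1 if tails and -x if heads. Tails means she is
# woken twice, heads once. Indifference at stake x corresponds to betting
# odds P(tails) = x / (1 + x).

def implied_p_tails(aggregate):
    """Bisect for the break-even stake x of the 'always accept' policy, where
    aggregate(payoffs) is the agent's utility over the combined outcome."""
    lo, hi = 0.0, 10.0
    for _ in range(60):                    # expected utility is decreasing in x
        x = (lo + hi) / 2.0
        ev = 0.5 * aggregate([-x]) + 0.5 * aggregate([1.0, 1.0])
        lo, hi = (x, hi) if ev > 0 else (lo, x)
    x = (lo + hi) / 2.0
    return x / (1.0 + x)

total = lambda payoffs: sum(payoffs)                   # cares about total winnings
average = lambda payoffs: sum(payoffs) / len(payoffs)  # cares about per-awakening average

print(implied_p_tails(total))    # ≈ 0.667: "thirder" odds on tails
print(implied_p_tails(average))  # ≈ 0.5: "halfer" odds
```

Nothing about the coin changes between the two runs; only the utility function over the combined outcome does, and the implied "probability" of tails moves with it.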
That seems good to understand better in itself, but it isn’t a crux for the argument. Whether you’ve got “probabilities” or a “caring measure” or just raw utility that doesn’t reduce to anything like that, it still seems like you’re justifying it with Pareto-type arguments. Scott’s claim is that Pareto-type arguments won’t apply if you correctly take into account the way in which you have control over certain things. I’m not sure whether that holds up, but basically the question is whether CCT (the complete class theorem) can make sense in a logical setting where you may have self-referential sentences and so on.
That’s a great question. My current (very vague) idea is that we might need to replace first order logic with something else. A theory like PA is already updateful, because it can learn that a sentence is true, so trying to build updateless reasoning on top of it might be as futile as trying to build updateless reasoning on top of probabilities. But I have no idea what an updateless replacement for first order logic could look like.
Another part of the idea (not fully explained in Scott’s post that I referenced earlier) is that non-exploited bargaining (AKA bargaining away from the Pareto frontier, AKA cooperating with agents with different notions of fairness) provides a model of why agents should not just take Pareto improvements all the time, and may therefore be a seed of a “non-Bayesian” decision theory (insofar as Bayes is about taking Pareto improvements).
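As a toy illustration of that (my own numbers and acceptance rule, only loosely modeled on the idea in Scott’s post): two agents split a pie of size 1 but disagree about the fair split, and each yields to an “unfair” demand only with a probability calibrated so the demander can’t profit by over-demanding. The expected outcome then sits strictly inside the Pareto frontier, yet neither agent is exploitable:

```python
# My own toy numbers, not from Scott's post. Disagreement means both get 0.

def accept_probability(fair_for_other, demand):
    """Yield with probability fair_for_other / demand (or slightly less), so
    the demander's expected share never exceeds what the accepter considers
    fair."""
    if demand <= fair_for_other:
        return 1.0
    return fair_for_other / demand

# Agent A thinks 0.5/0.5 is fair; agent B thinks B deserves 0.6.
a_fair_for_b, b_demand = 0.5, 0.6
p = accept_probability(a_fair_for_b, b_demand)        # ≈ 0.833

expected_b = p * b_demand            # ≈ 0.50: B gains nothing by over-demanding
expected_a = p * (1 - b_demand)      # ≈ 0.33: A pays a cost to stay unexploitable
print(expected_a, expected_b, expected_a + expected_b)  # total ≈ 0.83 < 1
```

The expected surplus lost relative to the frontier is exactly the sense in which refusing some Pareto improvements can be part of a sensible policy.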
Another way in which there might be something interesting in this direction is if we can further formalize Scott’s argument about when Bayesian probabilities are and aren’t appropriate, which is framed in terms of Pareto-style justifications of Bayesianism.