Ah.
So, you meant something like: if I think A is worse than B, but not infinitely worse than B, and I don’t have some kind of threshold (e.g., a threshold of probability) below which I no longer evaluate expected utility of events at all, then my beliefs about B are irrelevant to my decisions because my decisions are entirely driven by my beliefs about A?
I mean, that’s trivially true, in the sense that a false premise justifies any conclusion, and any finite system will have some threshold below which it simply stops evaluating events.
But in a less trivial sense… hm.
OK, thanks for clarifying.
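(A minimal sketch of the "less trivial" point, with made-up utilities and probabilities purely for illustration: when A is only finitely worse than B and nothing is excluded from evaluation, the expected-utility ranking of two options can hinge on how likely B is, so beliefs about B are not irrelevant.)

```python
def expected_utility(p_a, p_b, u_a, u_b, u_baseline=0.0):
    """Expected utility when A occurs with probability p_a, B with p_b,
    and neither with the remaining probability (baseline utility 0)."""
    return p_a * u_a + p_b * u_b + (1 - p_a - p_b) * u_baseline

# A is worse than B, but only finitely worse (illustrative numbers).
u_a, u_b = -1000.0, -10.0

# Two hypothetical actions that trade off the risks differently.
action_1 = expected_utility(p_a=0.001, p_b=0.20, u_a=u_a, u_b=u_b)  # less A-risk, more B-risk
action_2 = expected_utility(p_a=0.002, p_b=0.01, u_a=u_a, u_b=u_b)  # more A-risk, less B-risk

print(action_1, action_2)  # -3.0 vs -2.1
# Looking only at A, action_1 would win (-1 vs -2); the B terms reverse the
# ranking, so beliefs about B matter unless A's disutility is treated as infinite.
```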