We haven’t proved that you must either become an average utilitarian, or stop describing rationality as expectation maximization. But we’ve shown that there are strong reasons to believe that proposition. Without equally strong reasons to doubt it, it is in most cases rational to act as if it were true (depending on the utility of its truth or falsehood).
(And, yes, I’m in danger of falling back into expectation maximization in that last sentence. I don’t know what else to do.)
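A minimal sketch of the decision rule being invoked here (my own illustration, not part of the thread; the utilities are hypothetical): under expectation maximization, you act as if the proposition were true whenever doing so has higher expected utility, which depends on both your credence in it and the stakes either way.

```python
# Expected-utility comparison for "act as if P is true" vs. "don't",
# given credence p in P and illustrative (made-up) utilities for the
# four act/truth combinations.

def should_act_as_if_true(p, u_act_true, u_act_false,
                          u_not_true, u_not_false):
    """Return True when acting as if P is true maximizes expected utility."""
    eu_act = p * u_act_true + (1 - p) * u_act_false
    eu_not = p * u_not_true + (1 - p) * u_not_false
    return eu_act > eu_not

# Even a modest credence can suffice if the stakes are lopsided:
# EU(act) = 0.3*10 + 0.7*(-1) = 2.3 vs. EU(don't) = 0.3*0 + 0.7*1 = 0.7
print(should_act_as_if_true(0.3, 10, -1, 0, 1))  # True
```

This is why "depending on the utility of its truth or falsehood" matters: the same credence can warrant acting as-if or not, depending on the payoffs.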
Phil, I’ve finally managed to find a paper addressing this issue that doesn’t appear to be behind a paywall.
Weymark, John (2005) “Measurement Theory and the Foundations of Utilitarianism”
Please read it. Even if you don’t agree with it, it should at the very least give you an appreciation that there are strong reasons to doubt your conclusion, and that there are people smarter/more knowledgeable about this than either of us who would not accept it. (For my part, learning that John Broome thinks there could be something to the argument has shifted my credence in it slightly, even if Weymark ultimately concludes that Broome’s argument doesn’t quite work.)
The discussion is framed around Harsanyi’s axiomatic “proof” of utilitarianism, but I’m fairly sure that if Harsanyi’s argument fails for the reasons discussed, then so will yours.
EDIT: I’d very much like to know whether (a) reading this shifts your estimate of either (i) whether your argument has provided strong reasons for anything, or (ii) whether utilitarianism is true (conditional on expectation maximization being rational); and (b) if not, why not?
I haven’t read it yet. I’ll probably go back and change the word “strong”; it is too subjective, and provokes resistance, and is a big distraction. People get caught up protesting that the evidence isn’t “strong”, which I think is beside the point. Even weak evidence for the argument I’m presenting should still be very interesting.
When there are strong reasons, it should be possible to construct a strong argument, one you can go around crushing sceptics with. I don’t see anything salient in this case, to either support or debunk, so I’m either blind, or the argument is not as strong as you write it to be. It is generally good practice to run every available verification routine; it helps you find your way in the murky pond of weakly predictable creativity.
When there are strong reasons, it should be possible to construct a strong argument, one you can go around crushing sceptics with.
I really only need a preponderance of evidence for one side (utilities being equal). If I have a jar with 100 coins in it and you ask me to bet on a flip of a randomly drawn coin, and I know that one coin in the jar has two heads on it, I should bet heads. And you have to bet in this case: you have to have some utility function, if you’re claiming to be a rational utility-maximizer.
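The arithmetic behind the jar example can be checked directly (assuming, as seems intended, that the other 99 coins are fair and one coin is drawn at random and flipped once):

```python
# Jar of 100 coins: 99 fair, 1 double-headed. A coin is drawn at
# random and flipped once; which side should we bet on?

p_fair = 99 / 100          # chance the drawn coin is fair
p_double = 1 / 100         # chance it is the two-headed coin

# Total probability of heads, by the law of total probability.
p_heads = p_fair * 0.5 + p_double * 1.0

print(p_heads)             # 0.505: a slight edge over 0.5, so bet heads
```

The edge is tiny, but the point stands: with any preponderance of evidence at all, and a forced bet, the rational choice is determined.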
The fact that I have given any reason at all to think that you have to either become an average utilitarian or stop defining rationality as expectation maximization is in itself interesting, because of the extreme importance of the subject.
I don’t see anything salient in this case, to either support or debunk, so I’m either blind, or the argument is not as strong as you write it to be.
Do you mean that you don’t see anything in the original argument, or in some further discussion of the original argument?
If you “don’t see anything salient”, then identify a flaw in my argument. Otherwise, you’re just saying, “I can’t find any problems with your argument, but I choose not to update anyway.”
I’m sympathetic to this, but I’m not sure it’s entirely fair. It probably just means you’re talking past each other. It’s very difficult to identify specific flaws in an argument when you just don’t see how it is supposed to be relevant to the supposed conclusion.
If this were a fair criticism of Vladimir, then I think it would also be a fair criticism of you. I’ve provided what I view as extensive and convincing (to me! (and to Amartya Sen)) criticisms of your argument, to which your general response has been, not to point out a flaw in my argument, but instead to say “I don’t see how this is relevant”.
This is incredibly frustrating to me, just as Vladimir’s response probably seems frustrating to you. But I’d like to think it’s more a failure of communication than it is bloody-mindedness on your or Vladimir’s part.
Fair enough. It sounded to me like Vladimir was saying something like, “I think your argument is all right; but now I want another argument to support the case for actually applying your argument”.
I haven’t read that paper you referenced yet. If you have others that are behind paywalls, I can likely get a copy for us.