Baez: … you shouldn’t always maximize expected utility if you only live once.
BenElliot: [Baez is wrong] Expected utilities do not work like that.
XiXiDu: If a mathematician like John Baez can be that wrong …
A mathematician like Baez can indeed be that wrong, when he discusses technical topics that he is insufficiently familiar with. I’m sure Baez is quite capable of understanding the standard position of economists on this topic (the position echoed by BenElliot). But, as it apparently turns out, Baez has not yet done so. No big deal. Treat Baez as an authority on mathematical physics, category theory, and perhaps saving the environment. He is not necessarily an authority on the foundations of microeconomics.
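To make the standard position concrete, here is a minimal sketch (everything in it is illustrative: the log utility function and the dollar amounts are assumptions, not anything Baez or BenElliot wrote). Risk aversion lives inside the curvature of the utility function, so an expected utility maximiser can refuse a gamble with higher expected money without any “you only live once” exception:

```python
import math

def expected_value(lottery):
    """Expected monetary value of a lottery given as (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, u):
    """Expected utility of the same lottery under utility function u."""
    return sum(p * u(x) for p, x in lottery)

u = math.log  # concave utility: an illustrative assumption, implying risk aversion

wealth = 100.0
keep_it = [(1.0, wealth)]                        # keep $100 for sure
double_or_nothing = [(0.5, 200.0), (0.5, 1.0)]   # 50/50: $200 or $1

print(expected_value(double_or_nothing))        # 100.5 > 100 in expected money
print(expected_utility(keep_it, u))             # log(100) ~= 4.61
print(expected_utility(double_or_nothing, u))   # 0.5*log(200) + 0.5*log(1) ~= 2.65
# The expected-utility maximiser keeps the sure $100 even though the gamble
# has higher expected money: risk aversion is already inside u, so rejecting
# the gamble requires no exception to expected utility maximisation.
```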
What about Robin Hanson? See, for example, his posts here and here. What is it that he is insufficiently familiar with? Or what about Katja Grace, who has been a visiting fellow at SIAI? See her post here (there are many other posts by her).
And the people from GiveWell even knew about Pascal’s Mugging; what is it that they are insufficiently familiar with?
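For readers who haven’t seen it, the structure of Pascal’s Mugging is easy to exhibit with invented numbers (both figures below are placeholders, not anyone’s actual estimates):

```python
# Both numbers are invented placeholders, not anyone's actual estimates.
p_mugger_honest = 1e-20    # astronomically small credence in the mugger's story
lives_promised = 1e100     # astronomically large promised payoff

ev_pay = p_mugger_honest * lives_promised   # 1e80 expected "lives" for paying
ev_refuse = 0.0

print(ev_pay > ev_refuse)  # True: naive expected value says hand over the wallet
# The trouble: for any credence p > 0, the mugger can name a payoff large
# enough to dominate, so an unbounded naive expected-value maximiser can
# always be mugged. Knowing about this problem is exactly why "insufficiently
# familiar" is a hard charge to make stick against GiveWell.
```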
I mean, those people might each disagree for different reasons. But too often the argument that people simply don’t know what they are talking about is reached for, instead of trying to find out why else they might disagree. As I said in the OP, none of them doubts that there are risks from AI; they just hold that we don’t know enough to take those risks very seriously at the moment. The SIAI, by contrast, says that the utility at stake in AI-related matters outweighs those doubts. So if we were to pinpoint the exact nature of the disagreement, would it all come down to how seriously we should take vague possibilities?
And if you are right that the whole problem is that they are insufficiently familiar with the economics of existential risks, isn’t that something that could be fixed by putting some effort into raising awareness of why it is rational not to disregard risks from AI even if one believes they are very unlikely?
For the record, I never said I disagreed with the people from GiveWell. I don’t; my charity of choice is currently VillageReach. I merely disagree with Baez when he says we should not maximise expected utility. I would be very surprised to find Robin Hanson making the same mistake (if I did, I would seriously rethink my own position, and possibly lower my respect for Hanson significantly).
Please stop trying to divide the world into just two sides. Hanson’s arguments are that the probability of a singularity (as Eliezer sees it) is low enough that an expected utility maximiser would not spend much time worrying about it (at least, I think that’s his point; all he explicitly argues is that the probability is low). His point is not, even slightly, an argument against utility maximisation.
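That reading is easy to make concrete. In the toy calculation below every figure is an invented placeholder; the point is only that the ranking flips with the probability estimate, which is roughly where Hanson and the SIAI actually part ways:

```python
# Every figure here is an invented placeholder for illustration only.
future_lives_at_stake = 1e16          # assumed size of the future at risk
risk_reduction_per_dollar = 1e-15     # assumed marginal effect of a donation
lives_per_dollar_conventional = 1e-3  # assumed yield of a conventional charity

def ev_xrisk(p_catastrophe):
    """Expected lives saved per dollar, given a probability of AI catastrophe."""
    return p_catastrophe * risk_reduction_per_dollar * future_lives_at_stake

for p in (1e-2, 1e-6, 1e-12):
    winner = "x-risk" if ev_xrisk(p) > lives_per_dollar_conventional else "conventional"
    print(f"p = {p:g}: {ev_xrisk(p):g} lives/dollar -> {winner}")
# With these placeholders the ranking flips around p ~= 1e-4: the dispute is
# not over whether to maximise expected utility, but over the value of p.
```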
What BenElliot said.
Sheesh! Please don’t assume that everyone who disagrees with one point you made is doing so because he disagrees with the whole thrust of your thinking.
Hanson doesn’t seem to agree with Baez on the subject of utility maximisation. Baez was making no sense; he does seem to be “that wrong” on the topic.
Scope insensitivity.
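For context, the usual illustration is the bird study of Desvousges et al. (1992), with the figures as commonly quoted in the scope-insensitivity literature: willingness to pay barely moved as the stakes grew a hundredfold. A sketch of the contrast with scope-sensitive (linear) valuation:

```python
# Figures as commonly quoted from Desvousges et al. (1992): mean willingness
# to pay to save migrating birds, across a hundredfold change in scope.
birds        = [2_000, 20_000, 200_000]
reported_wtp = [80, 78, 88]  # dollars

# What scope-sensitive (linear) valuation would predict, anchored on the
# smallest case:
linear_wtp = [80 * n // 2_000 for n in birds]  # [80, 800, 8000]

for n, actual, linear in zip(birds, reported_wtp, linear_wtp):
    print(f"{n:>7} birds: reported ${actual:>4}, linear ${linear:>5}")
# Expected-utility calculations scale with the stakes; intuitive responses
# largely do not. That asymmetry is the usual argument for trusting the
# calculation over the gut feeling when the stakes are very large.
```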
He isn’t wrong; he’s just using different language than you are. And I might add that the language he is using reflects, as far as I can tell, the far more commonly accepted notion of utility, rather than VNM utility, which is what I assume you are talking about. By “commonly accepted” I mean that the average technical person who uses the word “utility” is probably not thinking of VNM utility. So if you want to write off Baez’s views, you should at least first agree on the same definition and then ask the same question.
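The definitional point matters, and a small sketch may help (the indifference probabilities below are hypothetical, chosen only to illustrate the construction). A VNM utility function is built from preferences over lotteries, so “maximise expected utility” is close to tautological for a VNM-coherent agent, whereas for utility-as-money it is a substantive and generally false claim:

```python
# Hypothetical elicited preferences: u(x) is *defined* as the probability p
# at which the agent is indifferent between x for sure and the lottery
# [p: best outcome, (1 - p): worst outcome].
vnm_u = {0: 0.0, 50: 0.8, 100: 1.0}  # "$50 for sure ~ 80% chance of $100"

def expected_u(lottery):
    """Expected VNM utility of a lottery given as (probability, outcome) pairs."""
    return sum(p * vnm_u[x] for p, x in lottery)

fifty_for_sure = [(1.0, 50)]
coin_flip      = [(0.5, 100), (0.5, 0)]

print(expected_u(fifty_for_sure))  # 0.8
print(expected_u(coin_flip))       # 0.5
# This agent prefers the sure $50 and is still a perfect expected *utility*
# maximiser; it simply is not an expected *money* maximiser. Running the two
# senses of "utility" together is a plausible source of the disagreement.
```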
See my other comment here. I originally misattributed the Baez quote to XiXiDu, so the reply was addressed to him directly.