I am usually a fan of making up specific numbers, but in this case that doesn’t seem useful
I think you really should. I asked you to compare P(mugger can save 3^^^^3 lives) with P(mugger can save 3^^^^^3 lives). The second probability should be only slightly lower than the first; it can't possibly be as low as a factor of 3^^^^3/3^^^^^3, because if you're talking to an omnipotent matrix lord, the number of arrows means nothing to them. So it doesn't matter how big u is: with enough arrows, P(mugger can save N lives) times U(N lives) is going to catch up.
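The "catching up" can be made concrete with a toy model. The numbers below are purely illustrative assumptions: a double exponential 2**(2**k) stands in for "k extra arrows" (real up-arrow values overflow any numeric type), and the probability is assumed to drop only by a constant factor per arrow, since arrows are cheap for a matrix lord.

```python
def p_mugger_truthful(k):
    # Assumption: each extra arrow only slightly lowers the probability
    # (a constant factor per arrow).
    return 0.01 * (0.9 ** k)

def utility(k):
    # Stand-in for tetrational growth: the exponent itself doubles per step.
    return 2.0 ** (2 ** k)

# Expected value of paying, as a function of how many "arrows" are claimed.
expected = [p_mugger_truthful(k) * utility(k) for k in range(6)]
print(expected)  # strictly increasing: the utility term dominates
```

Even with growth vastly slower than up-arrow notation, the product increases without bound, which is the point of the argument.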
What I am arguing is that it is wrong to assign a fairly low utility u to $1 worth of resources
What does “low utility” mean? $1 presumably has a tenth of the utility of the $10 that I have in my pocket right now, and it’s much lower than U(1 life), so it’s clearly not the most useful thing in the world, but aside from that, there isn’t much to say. The scale of utilities has a “0”, but the choice of “1” is arbitrary. Everything is high or low only in comparison to other things.
Do you think the probability that a mugger is telling the truth is a billion times as high in the world where 1,000,000,000 of them ask the AI versus the world where just 1 asks?
The muggers may or may not be independent. It’s possible that each of them has independent power to save a different set of 3^^^^3 lives. It’s also possible that all of them are lying, but P(a billion people are all lying) is surely much lower than P(one person is lying). I could imagine why you still wouldn’t pay, but if you did the math, the numbers would be very different from just one person asking a billion times.
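A quick sketch of why a billion independent muggers differ from one mugger asking a billion times. The per-mugger probability below is an arbitrary illustrative assumption; the point is only how independence compounds it.

```python
import math

p_truth = 1e-12   # assumed P(a single mugger is truthful) -- illustrative only
n = 10 ** 9       # number of muggers

# Under independence, P(all n are lying) = (1 - p_truth)^n.
# log1p keeps the computation accurate for p_truth this small.
p_all_lying = math.exp(n * math.log1p(-p_truth))

# P(at least one truthful mugger) is then roughly n * p_truth.
p_some_truthful = 1.0 - p_all_lying
print(p_some_truthful)  # ~1e-3, about a billion times the single-mugger figure
```

With one mugger asking a billion times, the evidence is perfectly correlated and the probability stays at p_truth; under independence it scales up by roughly a factor of n.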
Rather than addressing them here, I think I’ll make a part 2 where I explain exactly how I think about these points… or, alternatively, realize you’ve convinced me in the process (in that case I’ll reply here again).
What happened since is neither one nor the other, which is why I found it tricky to decide what to do. Basically, it seems to me that everything just comes down to the fact that expected utilities don’t converge. Every response I’d have to your arguments would run into that wall. This seems like an incredibly relevant and serious problem that throws a wrench into all of these kinds of discussions, and Pascal’s Mugging seems like merely a symptom of it.
So basically my view changed from “there’s no fire here” to “expected utilities don’t converge, holy shit, why doesn’t everyone point this out immediately?” But I don’t see Pascal’s Mugging as showcasing any problem independent from that, and I find the way I heard it talked about before pretty strange.
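The non-convergence can be illustrated with the classic St. Petersburg setup (not anything specific to the mugging): outcome n has probability 2^-n and payoff 2^n utils, so every term of the expected-utility sum equals 1 and the partial sums grow without bound.

```python
def partial_expected_utility(terms):
    # Each term is (2^-n) * (2^n) = 1, so the partial sum equals `terms`.
    return sum((2.0 ** -n) * (2.0 ** n) for n in range(1, terms + 1))

print([partial_expected_utility(t) for t in (10, 100, 1000)])  # [10.0, 100.0, 1000.0]
```

However many terms you include, adding more keeps raising the expected utility, so the full sum has no finite value, which is exactly the wall every response runs into.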
Thank you for that post. The way I phrased this clearly misses these objections. Rather than addressing them here, I think I’ll make a part 2 where I explain exactly how I think about these points… or, alternatively, realize you’ve convinced me in the process (in that case I’ll reply here again).