By the way, what happens if a billion independent muggers all mug you for 1 dollar, one after another?
The same as if one mugger asks a billion times, I believe. Do you think the probability that a mugger is telling the truth is a billion times as high in the world where 1,000,000,000 of them ask the AI versus the world where just 1 asks? If the answer is no, then why would the AI think so?
Why do you think that? What is the probability that the mugger does in fact have exclusive access to 3^^^^3 lives? And what is the probability for 3^^^^^3 lives?
In the section you quoted, I am not saying that other ways of affecting 3^^^^3 lives exist, I am saying that other ways with a non-zero probability of affecting that many lives exist – this is trivial, I think. A way to actually do this does, most likely, not exist.
So there is of course a probability that the mugger does have exclusive access to 3^^^^3 lives. Let’s call that p. What I am arguing is that it is wrong to assign a fairly low utility u to $1 worth of resources and then conclude “aha, since p · U(3^^^^3 lives) > u, it must be correct to pay!” And the reason for this is that u is not actually small. Calculating u, the utility of one dollar, itself includes considering various mugging-like scenarios: what if just a bit of additional self-improvement is needed to see how 3^^^^3 lives can be saved? It is up to the discretion of the AI to decide when the above inequality holds.
So p might be much larger than 1/3^^^^3, but u is actually very large, too. (I am usually a fan of making up specific numbers, but in this case that doesn’t seem useful.)
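To make the shape of this argument explicit (a sketch in my own notation; the decomposition and the weights q_i are illustrative, not something anyone in this exchange wrote down): u is itself an expected utility over everything the AI could do with the dollar,

$$u \;=\; \mathbb{E}\big[U(\text{what the AI does with } \$1)\big] \;=\; \sum_i q_i\, U(\mathrm{outcome}_i),$$

and some of the outcome_i are themselves of the form “a little more self-improvement reveals a way to save 3^^^^3 lives”. So the comparison p · U(3^^^^3 lives) > u ends up pitting one inflated quantity against another, not against something genuinely small.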
I am usually a fan of making up specific numbers, but in this case that doesn’t seem useful
I think you really should. I asked you to compare P(mugger can save 3^^^^3 lives) with P(mugger can save 3^^^^^3 lives). The second probability should be only slightly lower than the first; it can’t possibly be as low as 3^^^^3/3^^^^^3 times the first, because if you’re talking to an omnipotent matrixlord, the number of arrows means nothing to them. So it doesn’t matter how big u is: with enough arrows, P(mugger can save N lives) times U(N lives) is going to catch up.
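One hedged way to spell that out (my own sketch; write N_k for 3 followed by k up-arrows and another 3, so N_4 = 3^^^^3): if adding an arrow barely moves the probability while it explodes the utility, then

$$\frac{P(\text{mugger can save } N_{k+1} \text{ lives})}{P(\text{mugger can save } N_k \text{ lives})} \;\approx\; 1 \qquad \text{while} \qquad \frac{U(N_{k+1} \text{ lives})}{U(N_k \text{ lives})} \;\longrightarrow\; \infty,$$

so the product P(mugger can save N_k lives) · U(N_k lives) grows without bound in k and eventually overtakes any fixed u, however large u turns out to be.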
What I am arguing is that it is wrong to assign a fairly low utility u to $1 worth of resources
What does “low utility” mean? $1 presumably has a tenth of the utility of the $10 that I have in my pocket right now, and it’s much lower than U(1 life), so it’s clearly not the most useful thing in the world, but aside from that, there isn’t much to say. The scale of utilities has a “0”, but the choice of “1” is arbitrary. Everything is high or low only in comparison to other things.
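One possible way to formalize that last sentence (my gloss, not the commenter’s): if the unit of utility is arbitrary, then U and a·U for any a > 0 describe the same preferences, so only comparisons and ratios carry information, e.g.

$$\frac{U(\$10)}{U(\$1)} \approx 10, \qquad U(\$1) \ll U(\text{one life}),$$

while an unanchored label like “low utility” says nothing by itself.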
Do you think the probability that a mugger is telling the truth is a billion times as high in the world where 1,000,000,000 of them ask the AI versus the world where just 1 asks?
The muggers may or may not be independent. It’s possible that each of them has independent powers to save a different set of 3^^^^3 lives. It’s also possible that all of them are lying, but P(a billion people are all lying) is surely much lower than P(one person is lying). I could imagine why you still wouldn’t pay, but if you did the math, the numbers would be very different from just one person asking a billion times.
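A toy calculation makes the difference visible (q here is a made-up per-mugger probability of honesty, and independence is assumed as discussed above):

$$P(\text{at least one of } 10^9 \text{ muggers is honest}) \;=\; 1-(1-q)^{10^9} \;\approx\; 10^9 q \qquad \text{for } q \ll 10^{-9},$$

whereas a single mugger repeating the demand a billion times leaves that probability at roughly q. Whether the larger figure justifies paying is a separate question; the point is only that the expected-utility arithmetic comes out very differently.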
Rather than addressing them here, I think I’ll make a part 2 where I explain exactly how I think about these points… or, alternatively, realize you’ve convinced me in the process (in that case I’ll reply here again).
What happened since is neither one nor the other, which is why I found it tricky to decide what to do. Basically it seems to me that everything just comes down to the fact that expected utilities don’t converge. Every response I’d have to your arguments would run into that wall. This seems like an incredibly relevant and serious problem that throws a wrench into all of these kinds of discussions, and Pascal’s Mugging seems like merely a symptom of it.
So basically my view changed from “There’s no fire here” to “expected utilities don’t converge, holy shit, why doesn’t everyone point this out immediately?” But I don’t see PM as showcasing any problem independent from that, and I find the way I heard it talked about before pretty strange.
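For readers who haven’t seen the non-convergence spelled out, a standard St. Petersburg-style illustration (not specific to this exchange): give outcome n probability 2^{-n} and utility 2^n; then

$$\mathbb{E}[U] \;=\; \sum_{n=1}^{\infty} 2^{-n}\cdot 2^{n} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty,$$

and arguably the same thing happens under a complexity-penalized prior once hypotheses like “N lives are at stake” have utilities that grow faster than their prior probabilities shrink. On that view, Pascal’s Mugging is just one vivid symptom of the underlying divergence.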
Thank you for that post. The way I phrased this clearly misses these objections. Rather than addressing them here, I think I’ll make a part 2 where I explain exactly how I think about these points… or, alternatively, realize you’ve convinced me in the process (in that case I’ll reply here again).