As I see it, the mugger has an extremely bad hand to play.
If you treat the probability of the statement ‘I will kill one person if you don’t give me five dollars’ as standing in some fixed relationship to the chance of such a threat actually being carried out, and simply multiply up from there until you reach 3^^^^3 people, then you’re going to run into problems.
However, that sort of simplification (treating all the evidence as locating the same thing) only works for low multiples. (Which I’d imagine is why it feels wrong when you start talking about large numbers.) If you evaluate the evidence for and against the different parts of the statement, then you can’t simply scale the whole thing up without scaling up all the variables that evidence attaches to. The probability that a person will carry through on a threat to kill 3^^^^3 people for five dollars zeroes out fairly quickly. You’d need to scale up the dollars asked for in order to keep all the pieces of evidence in proportion to each other.
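To make the shape of that argument concrete, here’s a toy sketch in Python. Both the base probability and the decay rate are made-up numbers chosen purely for illustration; the only point is the contrast between multiplying a fixed credibility up and letting the credibility fall as the claimed body count grows.

```python
import math

# Toy model: expected deaths from refusing to pay, under two assumptions
# about how credible the threat is. All constants are illustrative.

def naive_expected_deaths(n_victims, p_one_victim=1e-6):
    # Naive scaling: keep the credibility of the one-victim threat fixed
    # and just multiply up. Expected harm grows without bound.
    return p_one_victim * n_victims

def scaled_expected_deaths(n_victims, p_one_victim=1e-6, decay=1e-3):
    # Scaled version: credibility falls off as the claimed body count
    # rises (here geometrically in log10(n), an arbitrary choice).
    p = p_one_victim * decay ** math.log10(n_victims)
    return p * n_victims

for n in (1, 100, 10**6, 10**100):
    print(f"{n:g}: naive={naive_expected_deaths(n):.3g}, "
          f"scaled={scaled_expected_deaths(n):.3g}")
```

Under the naive model the expected harm grows without bound and eventually dwarfs any finite alternative; under the scaled model it actually shrinks toward zero as the claim gets bigger, which matches the intuition above.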
To make the threat plausible, the mugger would have to be asking for a ridiculously large benefit for themselves. And once the mugger is asking for that huge benefit, the computer simply has to decide whether the resources could be put to better use elsewhere.
As it stands, however, the variables in the statement haven’t been scaled together so as to keep the evidence for and against the proposed murders in a constant relationship. And while it’s just about possible that someone would kill a few hundred people for five dollars (destroying a train or the like would be a low-investment exercise), the probability rapidly approaches zero as the number of people the mugger proposes to kill for those five dollars increases.
By the time you’re talking about 3^^^^3 lives, the probability would long since have been reduced to an absurdity. That expected value would then be compared against all the other things the FAI could do with five dollars, things with far higher probabilities of paying off, and the mugging would simply be dismissed as a bad gamble. (Over a great length of time, the computer could reasonably expect its outcomes to approach the predicted loss/benefit ratio, regardless of whether the mugger actually killed 3^^^^3 people.)
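Continuing the toy model above, the comparison the FAI faces might look something like this. The alternative’s expected value (one life saved per 10,000 five-dollar donations) is every bit as made-up as the threat’s credibility; the point is only that any modest, well-evidenced use of the money dominates once the threat’s probability has collapsed.

```python
import math

# Hypothetical decision step, using the same made-up credibility model
# as the sketch above. The alternative's payoff is an invented placeholder.

def scaled_expected_deaths(n_victims, p_one_victim=1e-6, decay=1e-3):
    # Same illustrative decay as before: credibility shrinks much faster
    # than the claimed body count grows.
    return p_one_victim * decay ** math.log10(n_victims) * n_victims

alt_expected_lives = 1e-4              # assumed: a well-evidenced use of $5
threat_expected_lives = scaled_expected_deaths(10**100)

# Under these numbers the modest alternative wins by roughly 200 orders
# of magnitude, so the gamble is rejected.
if threat_expected_lives > alt_expected_lives:
    print("pay the mugger")
else:
    print("spend the five dollars elsewhere")
```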