This might be overly simplistic, but it seems relevant to consider the probability per murder. I am feeling a bit of scope insensitivity on that particular probability- it is far too small for me to intuit directly- so I need to go through the steps.
If someone tells me that they are going to murder one person if I don’t give them $5, I have to consider the probability of it: not every attempted murder is successful, after all, and I have much less incentive to pay someone if I believe they won’t be successful. Further, most people don’t actually attempt murder, and the cost to that person of telling me they will murder someone if they don’t get $5 is much, much smaller than the cost of actually murdering someone. Consequences usually follow from murder, after all. I also have to consider the probability that this person is insane and doesn’t care about the consequences: only the $5.
Still, only .00496% of people are murdered in a year (according to Wolfram Alpha, at least). And while I would assign a higher probability to a person who claims they will murder someone, it wouldn’t jump dramatically- they could be lying, they could try but fail, etc. Even if I treat “I will kill someone” as a 90% accurate test with only a 10% false positive rate- which I think is generous in the case of $5 with no additional evidence- the posterior probability comes out to only about .04%. Even at 99% accuracy with a 1% false positive rate, EXTREMELY generous odds, there is only about a .5% total probability of the murder occurring.
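To make that update explicit, here’s a minimal sketch of the Bayes calculation- the base rate and test figures are just the assumptions above, not data:

```python
def posterior(base_rate, sensitivity, false_positive_rate):
    # P(murder | threat), treating the threat as a diagnostic test result
    p_threat = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    return sensitivity * base_rate / p_threat

base = 0.0000496  # .00496% annual murder rate, per Wolfram Alpha

print(posterior(base, 0.90, 0.10))  # ~0.00045, i.e. about .04%
print(posterior(base, 0.99, 0.01))  # ~0.0049, i.e. about .5%
```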
In reality, I think there would be some evidence in the case of one murder. At the very least I could get strong sociological cues that the person was likely to be telling the truth. However, since I am moving to an end point where they will be killing 3^^^^3 people, I’ll leave that aside as it is irrelevant to the end example.
If such a person claimed they would murder 2 people, it would depend on whether I thought the probabilities of the two killings were dependent or independent: that is, whether him killing one person makes it more likely that he kills a second, given the event (the threat) in question.
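Spelled out (this is just a restatement of the dependence question, via the chain rule):

$$P(\text{both die}\mid\text{threat}) = P(\text{first dies}\mid\text{threat})\cdot P(\text{second dies}\mid\text{first died},\text{threat})$$

Under full independence the second factor would equal the first; under full dependence it would be close to 1.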
Now, if he says he will kill two people, and he kills one, he is unlikely to stop before killing another. BUT, there are more chances for complication or failure, and the cost:benefit ratio for him shrinks by half, making it less probable that he tries, or manages, to kill anyone at all. These numbers would in reality be affected by circumstance: it is a lot easier to kill two people with a pistol or a bomb than it is with your bare hands. But since I see no bomb or pistol, and he is claiming some mechanism I have no evidence for, we’ll ignore that reality for now.
I had trouble finding information on the ratio of double homicides to single homicides to use as a baseline, but it seems likely that the two killings are neither totally dependent nor totally independent. In order to believe the threat credible, I have to believe (after hearing the threat) that he will attempt to kill two people (A), successfully kill one (B), AND successfully kill another (C). And if I put the probability of A and B together at .04%, I can’t very well put A, B, and C together any higher. Since I used a 90% accurate test for my initial calculation, let’s apply it twice: 81%. We’ll assume that the false negative rate (he murders people even when he says he won’t) stays constant.
This means that each additional murder is roughly 90% as likely to occur as the murder before it (slightly more, given the dependence above). Now, it isn’t exact, and these numbers get really, really small, so I’m looking at 3^3 as a reference point.
At 3^3, the cost has gone up 27x if he kills people, but the probability of the event has gone down to about .065 of what it was. So the threat is something like 1.7x more costly in expectation than the single-murder version (27 × .065 ≈ 1.7), given what was said above.
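Worked out as a quick sketch (the 0.9 per-additional-murder factor is the assumption from above, treated as exact):

```python
FACTOR = 0.9  # assumed credibility discount per additional murder

def expected_cost_ratio(n, factor=FACTOR):
    # Cost scales with n victims; probability scales by factor^(n-1)
    # relative to the single-murder threat.
    return n * factor ** (n - 1)

n = 3 ** 3  # 27
print(FACTOR ** (n - 1))       # ~0.065: probability relative to one murder
print(expected_cost_ratio(n))  # ~1.74: expected cost relative to one murder
```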
But all of this depended on several assumed figures. So at what point does it balance out?
I’m a little too tired to do all the math right now, but some quick work showed that treating the threat as only an 80% accurate test, with a 10% false positive rate, would be enough for the expected cost to start falling as the number of threatened victims grows. So if I am less than 80% sure of the test of “he says he will murder one person if I don’t give him 5 dollars,” then I can be sure that the probability that he will kill 3^^^^3 people shrinks far, far faster than the cost of being wrong grows.
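A sketch of that balance-point check, under the same toy model as above (expected cost ∝ n × r^(n-1), where r is the per-murder credibility factor; the specific r values are just illustrative):

```python
def expected_cost_ratio(n, r):
    # Expected cost of an n-murder threat relative to a single-murder threat,
    # assuming each additional murder is a factor r as likely as the previous.
    return n * r ** (n - 1)

for r in (0.9, 0.8, 0.5):
    curve = [expected_cost_ratio(n, r) for n in range(1, 100)]
    peak = max(range(len(curve)), key=curve.__getitem__) + 1
    print(f"r={r}: peaks at n={peak} (ratio {max(curve):.2f}), then falls toward zero")
```

Under this model the curve turns over and heads to zero for any r < 1, and is non-increasing from the very first murder once r ≤ 1/2, so by 3^^^^3 victims the probability term has long since crushed the cost term.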
I’m assuming that I’ve gotten the math right here, and I am quite tired, so if anyone wishes to correct me on some portion of this, I would welcome the criticism.