I’m surprised no one has linked to this yet. It’s not a perfect match, but I think that “if killing innocent people seems like the right thing to do, you’ve probably made a mistake” is close enough to be relevant.
Maybe less so before the post was edited, I guess.
I meant to link to that or something similar. In both situations I am killing someone. By not donating to a GiveWell charity, some innocent person in Africa dies (while the money saves more innocent lives elsewhere). So I am already in mistake territory, even before I start thinking about terrorism.
I don’t like being in mistake territory, so my brain is liable to want to shut off from thinking about it, or inure my heart to the decision.
There is a distinction between taking an action after which someone dies who counterfactually would have lived had you acted otherwise, and one after which someone dies who counterfactually would have lived had you never existed. While this distinction is unimportant to pure consequentialist reasoning, it bears on when a human attempting consequentialist reasoning should be wary of the fact that they are running on hostile hardware.
You can slightly change the scenarios so that people counterfactually wouldn't have died if you didn't exist, in ways that don't seem morally much different. For example: X is going to donate to GiveWell and save Z's life. Should you (Y) convince X to donate instead to an anti-tobacco campaign that will save more lives? Is this morally equivalent to (risk-free, escalation-free) terrorism, or to being X?
Anyway, I have the feeling people are getting bored of me on this subject, myself included. Simply chalk this up to someone not compartmentalizing correctly. Although I think that if I need to keep consequentialist reasoning compartmentalized, I am likely to find all consequentialist reasoning more suspect.
I think that “if killing innocent people seems like the right thing to do, you’ve probably made a mistake”.
I don’t think so. And I don’t get why you wouldn’t bomb Skynet if you could save the human race by doing so. Sure, you can call it a personal choice that has nothing to do with rationality. But in the face of posts like this, I don’t see why nobody here is suggesting taking active measures against uFAI. I can only conclude that you either don’t follow your beliefs through, or don’t discuss it because it could be perceived as terrorism.
It would seem so, but is taking war into enemy territory that reliably a mistake?