For discussion of the general response to hypothetical ticking time-bomb cases, in which one knows with unrealistic certainty that violating an ethical injunction will pay off, see the linked post. In reality, such an apparent assessment is far more likely to be the result of bias and a shortsighted, incomplete picture of the situation (e.g. neglecting the impact of being the kind of person who would do such a thing).
With respect to the idea of neo-Luddite wrongdoing, I’ll quote a previous comment:
The Unabomber attacked innocent people in a way that did not slow technological advancement and brought ill repute to his cause. The Luddites accomplished nothing. Some criminal nutcase hurting people in the name of preventing AI risks would just stigmatize his ideas and bring about impenetrable security for future AI development, without actually improving the odds of a good outcome (once X can make AGI, others will be able to do so then, or soon after).
“Ticking time bomb cases” are offered to justify legalizing torture, but they essentially never happen: there is always vastly more uncertainty and far lower expected benefit. It is dangerous to use such hypotheticals to justify legalizing abuse in realistic cases. No one is going to wind up in a state of justified confidence that wrongdoing to “disable Skynet” is an available option (if such a thing were known to exist, it would already be too late, so the idea could only apply under much more uncertain conditions), and if a system could be shown to be quite likely dangerous, one would call the police, regulators, and politicians.
In any plausible epistemic situation, the criminal in question would be undertaking actions with an almost certain effect of worsening the prospects for humanity, in the name of an unlikely and limited gain. That is, the act would have terrible expected consequences. The danger is not that rational consequentialists are going to go around bringing about terrible consequences (in between stealing kidneys from out-of-town patients, torturing accused criminals, and other misleading hypotheticals in which we are asked to consider an act with bad consequences under the implausible supposition that it has good consequences); the danger is that such arguments provide encouragement and direction to mentally unstable people who don't think things through.
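To put the expected-consequences point in toy form (the symbols here are illustrative, not from the quoted comment): suppose the act succeeds with some small probability $p$, yielding a bounded gain $G$, and otherwise backfires (stigma, hardened security, lost credibility for the cause) at cost $H$, where $H \gg G$. Then

$$\mathbb{E}[\text{value}] = pG - (1-p)H < 0,$$

and the inequality is robust to considerable optimism about $p$, which is the sense in which the expected consequences are terrible even granting the actor's own goals.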
Absolutely. This is by far the most actually rational comment in this whole benighted thread (including mine), and I regret that I can only upvote it once.