From Nick Bostrom’s paper on infinite ethics:
“If there is an act such that one believed that, conditional on one’s performing it, the world had a 0.00000000000001% greater probability of containing infinite good than it would otherwise have (and the act has no offsetting effect on the probability of an infinite bad), then according to EDR one ought to do it even if it had the certain side‐effect of laying to waste a million human species in a galactic‐scale calamity. This stupendous sacrifice would be judged morally right even though it was practically certain to achieve no good. We are confronted here with what we may term the fanaticism problem.”
Later:
“Aggregative consequentialism is often criticized for being too “coldly numerical” or too revisionist of common morality even in the more familiar finite context. Suppose that I know that a certain course of action, though much less desirable in every other respect than an available alternative, offers a one‐in‐a‐million chance of avoiding catastrophe involving x people, where x is finite. Whatever else is at stake, this possibility will overwhelm my calculations so long as x is large enough. Even in the finite case, therefore, we might fear that speculations about low‐probability‐high‐stakes scenarios will come to dominate our moral decision making if we follow aggregative consequentialism.”
Exactly. Expected utility maximization together with an unbounded utility function necessarily leads to what Nick calls fanaticism. This is the usual use of the term: people call other people fanatics when their utility functions seem to be unbounded.
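To make the arithmetic behind this concrete, here is a minimal sketch (the numbers and the `expected_utility` helper are mine, not Bostrom's): with an unbounded utility function, for any nonzero probability you can always pick a payoff large enough that the gamble beats any fixed, certain cost.

```python
# Illustrative sketch (not from the paper): an arbitrarily small probability
# of a large enough payoff dominates any fixed, certain cost in an
# expected-utility comparison, provided the utility function is unbounded.

def expected_utility(probability, payoff, certain_cost):
    """Expected utility of taking the gamble: p * payoff minus a certain cost."""
    return probability * payoff - certain_cost

p = 1e-16            # the 0.00000000000001% chance from Bostrom's example
certain_cost = 1e9   # stand-in utility cost of the "galactic-scale calamity"

# Because utility is unbounded, we can always choose a payoff large enough
# that the gamble wins, no matter how small p is.
payoff = 10 * certain_cost / p

print(expected_utility(p, payoff, certain_cost) > 0)   # True: take the gamble
```

With a bounded utility function this construction fails: once the payoff hits the bound, no further increase can compensate for a small enough probability, which is exactly why fanaticism is tied to unboundedness.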
As Eliezer has pointed out, it is a dangerous sign when many people agree that something is wrong without agreeing why; we see this happening in the case of Pascal’s Wager and Pascal’s Mugging. In reality, a utility maximizer with an unbounded utility function would accept both. The readers of this blog, being human, are not utility maximizers. But they are unwilling to admit it because certain criteria of rationality seem to require being such.