If there’s even the slightest chance that it will result in an infinite bad, then the problem is much more complicated.
There’s always a nonzero chance that any action will cause an infinite bad, and also an infinite good. Even with finite but unbounded utility functions, the expected-utility sums diverge.
Expected utility maximization together with an unbounded utility function necessarily leads to what Nick calls fanaticism.
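To see the divergence concretely, here is a toy St. Petersburg-style calculation (the specific numbers are mine, purely for illustration): suppose hypothesis $H_n$ has probability $2^{-n}$ and, if true, your action produces utility $3^n$. Every individual payoff is finite, yet

$$\mathbb{E}[U] \;=\; \sum_{n=1}^{\infty} 2^{-n}\,3^{n} \;=\; \sum_{n=1}^{\infty} \left(\tfrac{3}{2}\right)^{n} \;=\; \infty,$$

so the expected utility diverges, and the comparison between actions is settled entirely by the tail of ever-less-probable, ever-higher-payoff hypotheses; that tail-domination is exactly the fanaticism in question.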
Bounded utility functions have counterintuitive results as well. Most of these only show up in rare (but still realistic) global “what sort of world should we create” situations, but there can be local effects too; as I believe Carl Shulman pointed out, bounded utility causes your decisions to be dominated by low-probability hypotheses that there are few people (so your actions can have a large effect).
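As a toy illustration of that effect (my numbers, not Carl’s): take a utility function bounded by 1, and suppose that on the many-people hypothesis (probability 0.99) the world’s utility is already near the bound, so your action can move it by at most $10^{-4}$, while on the few-people hypothesis (probability 0.01) your action can move it by $10^{-1}$. Then

$$0.01 \times 10^{-1} \;=\; 10^{-3} \;>\; 0.99 \times 10^{-4} \approx 10^{-4},$$

so the expected-utility comparison between your options is driven almost entirely by the low-probability few-people world.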