You mention that there are finite fanaticism problems (in addition to infinite ones), but I don't think you illustrated this. So, in case a reader is inclined to think they can solve fanaticism by somehow ignoring infinity (which would make ignoring infinity more appealing), here's an example of how you're still left with fanaticism:
We should have credence at least 10^-12 that long-term value is not linear in resources but exponential. That possibility then dominates our expected utility, so rather than maximizing expected resources we do something much closer to maximizing the maximum possible resources (however unlikely), with implications including building superintelligence as fast as possible (as long as you think it's more likely to optimize for value than for disvalue).
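To make the dominance concrete, here's a minimal sketch of the arithmetic. All the numbers below (the resource amounts, the 10^-6 payoff chance) are toy assumptions of mine, not from the post; only the 10^-12 credence on exponential value is from the argument above:

```python
import math

# Toy setup, all numbers hypothetical: a "safe" action yields R_SAFE
# resources for sure; a "risky" action yields R_RISKY resources with
# small probability P_RISKY and nothing otherwise.
P_EXP = 1e-12     # credence that value is exponential in resources
R_SAFE = 1_000.0
R_RISKY = 1_100.0
P_RISKY = 1e-6

# Linear hypothesis: value = resources, so expected value is expected
# resources, and the risky action looks ~10^6 times worse.
ev_linear_safe = R_SAFE               # 1000
ev_linear_risky = P_RISKY * R_RISKY   # ~0.0011

# Exponential hypothesis: value = exp(resources). exp(1000) overflows a
# float, so compare the credence-weighted terms in log space.
log_term_safe = math.log(P_EXP) + R_SAFE                        # ~972.4
log_term_risky = math.log(P_EXP) + math.log(P_RISKY) + R_RISKY  # ~1058.6

# Even weighted by 10^-12, the exponential term (~e^972) dwarfs the
# linear term (~10^3), so total expected utility is decided by the
# exponential hypothesis, and the risky action wins.
print(log_term_safe > math.log(ev_linear_safe))  # True: exp term dominates
print(log_term_risky > log_term_safe)            # True: risky action wins
```

The point the sketch illustrates: once any tail hypothesis assigns exponentially growing value, the expected-utility calculation is effectively decided by that tail, however small its credence.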
(The same goes for versions of Pascal's Mugging that are truly finite, and for acausal trade in some cases.)