I wasn’t the downvoter (nor the upvoter), and wouldn’t have downvoted; but I would suggest considering the abstract version of the problem:
Given that, in general, a Turing machine can increase in utility vastly faster than it increases in complexity, how should an Occam-abiding mind avoid being dominated by tiny probabilities of vast utilities?
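To make the divergence concrete, here is a minimal Python sketch (mine, not from the thread): it conflates description length with complexity for illustration, and uses Knuth's up-arrow (3^^n) as a stand-in for a fast-growing utility nameable by a program of roughly that complexity. Under a 2^-n Occamian prior, the expected-utility contributions grow without bound.

```python
# Minimal sketch, assuming a 2^-n prior over description length n
# and treating 3^^n as a utility nameable at roughly complexity n.
# Illustrative only, not a claim about the exact Solomonoff measure.

def up_arrow(base: int, height: int) -> int:
    """Compute base^^height, i.e. a tower of `height` exponentials."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

for n in range(1, 4):  # towers taller than 3 are infeasible to evaluate
    prior = 2.0 ** -n           # Occamian complexity penalty
    utility = up_arrow(3, n)    # 3, 27, 3^27 = 7625597484987, ...
    print(n, prior * utility)   # 1.5, 6.75, ~9.5e11 -- unbounded
```

The point being that a penalty shrinking only exponentially in complexity cannot outrun the utilities a short program can denote, so the naive expected-utility sum is dominated by exactly these tiny-probability, vast-utility hypotheses.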
No responses and a downvote. Clearly I’m missing something obvious.