In practice, I think the expected utility maximiser appeals more to philosophers and mathematicians: it involves solving everything perfectly ahead of time, after which all that remains is implementation. I can see the unlosing agent being more attractive to an engineer, though.
Postulating a utility function makes for cleaner exposition. It is probably more realistic, though, to suppose that one’s utility function is only imperfectly known and/or difficult to calculate (at least outside a narrow setting), so some other approach might not be a bad idea.
But if you were sure that you’d face it only a few thousand times, what then? Take a forward-thinking unlosing agent. If it expected that it would get Pascal mugged only a few thousand times, it could perfectly well reject all of them without hesitation (and derive all the advantages of this). If it expected that there was a significant risk of getting Pascal mugged over and over and over again, it would decide to accept.
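For concreteness, here is a toy sketch of that forward-looking policy choice. This is my own illustration with made-up numbers (the threshold, the mugger's probability, and a stand-in payoff, since an actual 3^^^^3 won't fit in a float), not anyone's formal definition of an unlosing agent: the idea is just that "always reject" stops being a safe policy only once the expected number of muggings is large enough that the chance of at least one genuine payoff stops being negligible.

```python
import math

def should_accept_muggings(expected_encounters: int,
                           payoff_probability: float = 1e-20,
                           cost_per_mugging: float = 5.0,
                           huge_payoff: float = 3.0e30) -> bool:
    """Choose a policy for the whole expected sequence of muggings."""
    # Probability that at least one mugging genuinely pays out, computed
    # stably even for tiny probabilities and astronomical counts.
    p_any_payoff = -math.expm1(expected_encounters
                               * math.log1p(-payoff_probability))

    # If that probability is negligible, "always reject" is not a losing
    # policy: with near-certainty it beats paying the fee every time.
    if p_any_payoff < 1e-9:  # arbitrary threshold for "almost surely"
        return False

    # Otherwise compare the two policies' outcomes over the whole sequence.
    gain_if_accepting = (p_any_payoff * huge_payoff
                         - cost_per_mugging * expected_encounters)
    return gain_if_accepting > 0.0

print(should_accept_muggings(3_000))    # a few thousand muggings: False
print(should_accept_muggings(10**25))   # astronomically many: True
```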
If it expected a significant risk of getting mugged over and over, it would take its $5*3^^^^3 and build an army capable of utterly annihilating any known mugger.