If it is possible for an agent—or, say, the human species—to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence.
The problem with Pascal’s Wager is that it allows absurdly large utilities into the equation. If I’m looking at a nice fresh apple, and it’s 11:45am just before lunch, and breakfast was at 7am, then suppose the utility increment from eating that apple is X. I’d subjectively estimate the utility increment of the best possible future (Heaven, for Pascal’s Wager; the infinite wonderful future in the scenario quoted above) at less than one trillion times X, probably less than a billion, perhaps more than a million, and definitely more than a thousand. If we make the increment much larger, say 3^^^3 times X, then we run into Pascal’s Wager problems.
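To make the arithmetic concrete, here is a minimal sketch of why a bounded utility ceiling blocks the wager while an unbounded one does not. All the names and numbers are illustrative assumptions, not values from the text: X is the apple's utility increment, CAP is the rough one-trillion-X ceiling, and HUGE is a stand-in for something like 3^^^3 times X (which is far too large to represent as a float).

```python
# Illustrative sketch of the bounded-vs-unbounded utility arithmetic.
# X, CAP, HUGE, and p are hypothetical numbers chosen for demonstration.

X = 1.0          # utility of eating the apple now, used as our unit
CAP = 1e12 * X   # subjective ceiling: best possible future < one trillion X

def expected_utility(p, payoff):
    """Expected utility of a wager paying `payoff` with probability p."""
    return p * payoff

p = 1e-15  # some tiny probability assigned to the wager paying off

# With the bounded ceiling, a small enough probability lets the sure apple win:
print(expected_utility(p, CAP) > X)   # False: 1e-3 < 1, so eat the apple

# Without a ceiling, even a modest stand-in for 3^^^3 * X swamps everything:
HUGE = 1e100 * X
print(expected_utility(p, HUGE) > X)  # True: 1e85 > 1, the wager dominates
```

The cap is doing all the work here: once the payoff is allowed to grow without bound, no probability small enough for a human to plausibly assign can keep the wager's expected utility below that of the sure apple.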