The two responses are not equivalent. The utility in a utility function is subjective in the sense that it represents how much I care about something, and I am saying that there is literally nothing I care enough about to pay $100 for a one-in-a-googolplex probability of accomplishing it. So, for example, if I knew for an absolute fact that for $100 I could get that probability of saving 3^^^^^^^^^3 lives, I would not do it. Saying the utility can’t be that enormous does not rule out any objective facts: it just says I don’t care that much. The only way it could turn out that the utility really were that enormous is if I started to care that much. And yes, I would pay $100 if it turned out that I was willing to pay $100. But I’m not.
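A minimal sketch of the comparison this position rests on, assuming (my assumption, not anything stated in the thread) that utilities can be put on a rough dollar scale and that we work with exponents rather than raw numbers, since one in a googolplex underflows any ordinary float:

```python
# Illustrative expected-value comparison behind "I would not pay $100 for a
# one-in-a-googolplex chance" -- a sketch, working in log10 space because
# 10 ** -(10 ** 100) cannot be represented as a float.

GOOGOL = 10 ** 100            # a googolplex is 10 ** GOOGOL
log10_probability = -GOOGOL   # log10 of the offered chance: one in a googolplex
log10_price = 2               # log10 of the $100 price

def worth_paying(log10_utility: int) -> bool:
    """True iff probability * utility > price, compared via logarithms."""
    return log10_probability + log10_utility > log10_price

# Even a staggeringly large (but not "enormous") utility is nowhere near enough:
print(worth_paying(10 ** 80))      # False
# Only a utility above 10 ** (GOOGOL + 2) tips the scale -- that is the
# "enormous utility" the comment says it does not, in fact, assign to anything.
print(worth_paying(GOOGOL + 3))    # True
```

The point of the log-space trick is only to make the magnitudes legible: the disagreement is over whether any outcome gets a utility large enough to clear that bar, not over how to do the multiplication.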
Attempting to rule out a probability by axioms, on the other hand, is ruling out objective possibilities, since objective facts in the world cause probabilities. The whole point of your axiom is that you are unwilling to pay that $100 even if the probability really were one in a googolplex. Your probability axiom is simply not your true rejection.
Saying the utility can’t be that enormous does not rule out any objective facts: it just says I don’t care that much.
To say you don’t care that much is a claim of objective fact. People sometimes discover that they care very much (or, if you like, change so that they come to care very much) about something they did not care about before. For example, conversion to ethical veganism. You may say that you will never entertain an enormous utility, and this claim may be true, but it is still an objective claim.
And how do you even know? No-one can exhibit their utility function, supposing they have one, nor can they choose it.
As I said, I concede that I would pay $100 for that probability of that result if I cared enough about that result, but my best estimate of how much I care about that probability of that result is “too little to consider.” And I think that is currently true of every other human being as well.
(Also, you consistently seem to be implying that “entertaining enormous utility” is something different from being willing to pay a meaningful price for a small probability of something, but these are simply identical: asking whether I might objectively accept an enormous utility assignment is just the same thing as asking whether there might be some principles that would cause me to pay that price for that small probability.)