I’m not convinced that the numbers are stacked the way you say they are. Specifically, conditional on ethical egoism, I am not at all uncertain between a logarithmic and a square-root utility curve; the question is whether it’s log, log-log, or something slower-growing and perhaps even bounded. So from my perspective the single biggest bit of “stacking” goes in the direction that favours A over B.
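To make the curvature point concrete, here’s a minimal sketch. The payoffs and probabilities are invented purely for illustration, and it assumes, only for the sake of the example, that A is a sure moderate gain while B is a small chance of a vastly larger one; all it shows is that a rapidly-growing curve like square-root favours the long shot, while log, log-log, and bounded curves favour the sure thing.

```python
import math

# All numbers below are invented for illustration; they are not from this discussion.
BASELINE = 1e4      # assumed pre-existing level of whatever utility is taken over
A_PAYOFF = 1e6      # hypothetical: A is a sure gain of this much
B_PAYOFF = 1e12     # hypothetical: B pays this much with small probability...
B_PROB = 1e-3       # ...and nothing otherwise

curves = {
    "sqrt (rapidly growing)": lambda x: math.sqrt(x),
    "log": lambda x: math.log(x),
    "log-log": lambda x: math.log(math.log(x)),
    "bounded (1 - 1/log)": lambda x: 1 - 1 / math.log(x),
}

for name, u in curves.items():
    eu_a = u(BASELINE + A_PAYOFF)
    eu_b = B_PROB * u(BASELINE + B_PAYOFF) + (1 - B_PROB) * u(BASELINE)
    print(f"{name:24s} prefers {'A' if eu_a > eu_b else 'B'} "
          f"(EU(A)={eu_a:.3f}, EU(B)={eu_b:.3f})")
```

Only the square-root row comes out preferring B; the curves I actually regard as live candidates all prefer A.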
Note 1: The non-egoist case is different, since it’s very plausible prima facie that the utilities of others should be combined additively.

Note 2: I suppose “not at all uncertain” is, as usual, an overstatement, but I think that if I’m wrong on this point then my understanding of my own preferences is so badly wrong that “all bets are off”, and focusing on the particular possibility you have in mind here is privileging the hypothesis you happen to prefer. For instance, I think an egoist with apparent-to-self preferences that broadly resemble mine should give at least as much weight to the possibility that s/he isn’t really an egoist as to the possibility that his/her utility function is as rapidly growing as you suggest. Note that, e.g., conditional on non-egoism, one should probably give non-negligible probability to various egalitarian principles, either as axioms or as consequences of rapidly-diminishing individual utility functions, and these will tend to give strong reason for preferring B over A.
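On the point about egalitarianism falling out of additive aggregation plus rapidly-diminishing individual utilities, another invented example: with a fixed total split between two people, the equal split wins without any explicitly egalitarian axiom.

```python
import math

def u(x):
    # Stand-in for a rapidly-diminishing individual utility function.
    return math.log(x)

equal = [50, 50]      # hypothetical equal division of 100 units
unequal = [99, 1]     # hypothetical unequal division of the same total

print(sum(u(x) for x in equal))    # ~7.82
print(sum(u(x) for x in unequal))  # ~4.60 -- the equal split is preferred
```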