A minimal non-anthropic example that illustrates the difference
The decision you describe is not stable under pre-commitments. Ahead of time, all agents would pre-commit to the $2/3. Yet they seem to change their minds when presented with the decision. You seem to be double counting: using Bayesian updating once, and then also using the fact that their own decision is responsible for the other agents' decisions.
In the terminology of the paper http://www.fhi.ox.ac.uk/anthropics-why-probability-isnt-enough.pdf , your agents are altruists using linked decisions with total responsibility and no precommitments, which is a foolish thing to do. If they were altruists using linked decisions with divided responsibility (or if they used precommitments), everything would be fine. (I don't like or use that old terminology—UDT does it better—but it seems relevant here.)
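The double-counting charge can be made concrete with a toy calculation. The numbers below are hypothetical (not taken from the original example): a fair coin creates one agent on heads and two on tails, and every agent faces the same linked accept/reject decision. The sketch compares the ex-ante pre-commitment value with the "update plus total responsibility" value and the "update plus divided responsibility" value:

```python
# Hypothetical toy numbers, not from the original post: a fair coin creates
# one agent on heads and two on tails; each agent faces the same (linked)
# accept/reject decision on a deal that costs 3 utility on heads and pays
# 1 utility per agent on tails.
HEADS_PAYOFF = -3.0      # total utility if "accept" and the coin lands heads
TAILS_PAYOFF_EACH = 1.0  # utility per agent if "accept" and the coin lands tails
N_TAILS_AGENTS = 2

def ev_precommitment():
    """Ex-ante expected utility of accepting, fixed before any agent exists."""
    return 0.5 * HEADS_PAYOFF + 0.5 * N_TAILS_AGENTS * TAILS_PAYOFF_EACH

def ev_double_counting():
    """The fallacious mix: an SIA-style update (P(tails) = 2/3, since two of
    the three possible agents live in the tails world) *and* total
    responsibility (each agent credits itself with the full payoff of all
    linked decisions)."""
    return (1/3) * HEADS_PAYOFF + (2/3) * N_TAILS_AGENTS * TAILS_PAYOFF_EACH

def ev_divided_responsibility():
    """SIA-style update, but divided responsibility: each tails agent takes
    credit for only its 1/N share of the linked outcome."""
    share = N_TAILS_AGENTS * TAILS_PAYOFF_EACH / N_TAILS_AGENTS
    return (1/3) * HEADS_PAYOFF + (2/3) * share

print(ev_precommitment())           # -0.5: reject, matching the pre-commitment
print(ev_divided_responsibility())  # ~-0.33: also reject — same decision
print(ev_double_counting())         # ~+0.33: accept — the unstable answer
```

With these stakes, updating *and* claiming total responsibility flips the sign of the expected value, so the agent reneges on the pre-commitment; divided responsibility (or no update with total responsibility) reproduces the pre-committed choice.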
But that's a distraction from the main point: I still don't see any difference between indexical and non-indexical total utilitarianism. I don't see why a non-indexical total utilitarian can't follow the wrong reasoning you used in your example just as well as an indexical one, if either of them can—and similarly for the right reasoning.
> The decision you describe is not stable under pre-commitments. Ahead of time, all agents would pre-commit to the $2/3. Yet they seem to change their minds when presented with the decision. You seem to be double counting: using Bayesian updating once, and then also using the fact that their own decision is responsible for the other agents' decisions.
Yes, this is exactly the point I was trying to make—I was pointing out a fallacy. I never intended "indexicality-dependent utilitarianism" to be a meaningful concept; it's only a name for this fallacious way of thinking.