The decision you describe is not stable under pre-commitment. Ahead of time, all agents would pre-commit to the 2/3, yet they seem to change their minds when presented with the decision. You seem to be double-counting: you apply the Bayesian update once, and then additionally rely on the fact that each agent's own decision is responsible for the other agents' decisions.
Yes, this is exactly the point I was trying to make: I was pointing out a fallacy. I never intended “lexicality-dependent utilitarianism” to be a meaningful concept; it's only a name for thinking in this fallacious way.