Regarding the third point: even if every moral agent (and their option space and their mind) is always finite, it’s still conceivable that a single binary decision—made over the course of, say, a week—could involve sophisticated and perhaps even “correct” reasoning that balances the interests of an infinite set of moral patients. There seem to be at least some clear-cut cases, like choosing to create an infinite heaven instead of an infinite hell. The question is then what formal normative principles we could apply to judge (and guide) such reasoning in general.
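As one illustrative candidate for such a principle (a sketch only; the pointwise dominance criterion below is a standard move in the infinite-ethics literature, picked here as an example rather than as a claim about the uniquely correct formalism): a patient-by-patient comparison can deliver a verdict even when total welfare diverges in both outcomes.

```latex
% Pairwise dominance over a fixed infinite population of patients indexed by i,
% where u_A(i) is patient i's welfare under outcome A (notation chosen for this sketch):
A \succ B \iff \forall i \in \mathbb{N}:\; u_A(i) \ge u_B(i)
            \;\text{ and }\; \exists j \in \mathbb{N}:\; u_A(j) > u_B(j)
% Heaven vs. hell: take u_{\mathrm{heaven}}(i) = +1 and u_{\mathrm{hell}}(i) = -1
% for every i; heaven then dominates hell pointwise, even though the totals
% \sum_i u_{\mathrm{heaven}}(i) and \sum_i u_{\mathrm{hell}}(i) both diverge.
```

On this reading, the heaven/hell case is clear-cut precisely because one option dominates the other for every patient; the harder question is what such principles should say when the welfare profiles cross, where dominance alone is silent.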