There’s a “non-identity problem”-type question about whether we can harm future agents by setting up the memetic environment such that they end up with less easily satisfiable goals, compared to an alternative where they’d find themselves in greater agreement and therefore with more easily satisfiable goals.
I hadn’t heard of that before, I’m glad you mentioned it. Your comment (as a whole) was both interesting/insightful/etc. and long, and I’d be interested in reading any future posts you make.
For the record, this is probably my key objection to preference utilitarianism, but I didn’t want to dive into the details in the post above (for a very long post about such things, see here).