What you describe sounds like a more radical version of my post: in your account, ethics is all about individuals pursuing their personal life goals while being civil toward one another. I think part of ethics is about that, but those of us who are motivated to dedicate our lives to helping others can still ask “What would it entail to do what’s best for others?” – that’s where consequentialism (“care morality”) comes into play.
I agree with you that ranking populations according to the value they contain, in a sense that’s meant to be independent of the preferences of the people within that population (how they want the future to go), seems quite strange. Admittedly, I think there’s a reason many people, and effective altruists in particular, are interested in coming up with such rankings. Namely, if we’re motivated to go beyond “don’t be a jerk” and want to dedicate our lives to altruism, or even to the ambitious goal of “doing the most moral/altruistic thing,” we need to form views on welfare tradeoffs and on questions like whether it’s good to bring people into existence who would be grateful to be alive. That said, I think such rankings (at least in population-ethical contexts where the number of people/beings isn’t fixed, or where it isn’t fixed what types of interests/goals a new mind is going to have) always contain a subjective element. I think it’s misguided to assume that there’s a correct world ranking, an “objective axiology,” for population ethics. Even so, individual people may want to form informed views on the matter, because there’s a sense in which we can’t avoid forming opinions on this. (Not forming an opinion just means “anything goes” / “this whole topic isn’t important” – which doesn’t seem true, either.)
I’m copy-pasting a comment I made on the EA forum version of this post: