Could you give links to two examples?
One two three four
All of these seem to engage with OP's caveats. E.g. one of OP's objections is "However, this means that you should also actually create the utility for all these new lives, or they will not add to (or even subtract from) your utility calculation", and that's something the posts consider:
In addition, you need to also consider that future humans would lead vastly better lives than today’s humans due to the enormous amount of technological progress that humanity would have reached by then.
Against neutrality about creating happy lives
ABSTRACT. With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe.
Caring about the future of sentience is sometimes taken to imply reducing the risk of human extinction as a moral priority. However, this implication is not obvious so long as one is uncertain whether a future with humanity would be better or worse than one without it.
(Emphasis added.) Since all four of your links directly engage with the points OP raises, I don’t think they’re a good example of rationalists ignoring such points.