Clearly the system is a lot less contextual than base models, and it seems like you are predicting a reversal of that trend?
The trend may be bounded, or it may not have gone far by the time AI can invent nanotechnology; it would be great if someone actually measured such things.
And the existence of a trend at all is not predicted by the utility-maximization frame, right?
Not necessarily: you can treat creating new people differently from people who already exist, and avoid creating bad lives (in the Endurist sense: not enough positive experiences, regardless of suffering) without accepting death for existing people. I, for example, don't get why you would bring more death into the world by creating short-lived people, if you don't like death.