Yeah, in the lightcone scenario evolution probably never actually aligns the inner optimizers, although it might: a superintelligence copying itself will have little leeway for any of those copies having slightly more drive to copy themselves than their parents did. It depends on how well it can fight robot cancer.
However, while a cancer-free paperclipper wouldn’t achieve “AGIs take over the lightcone and fill it with copies of themselves, to at least 90% of the degree to which they would do so if their terminal goal were filling it with copies of themselves,” it would achieve something like “AGIs take over the lightcone and briefly fill it with copies of themselves, to at least 10^-3% of the degree to which they would do so if their terminal goal were filling it with copies of themselves,” which is in my opinion really close. For comparison: if Alice sets off Kmart AIXI with the goal of creating utopia, we don’t expect the outcome “AGIs take over the lightcone and convert 10^-3% of it to temporary utopias before paperclipping.”
Also, unless you can beat entropy, for almost any optimization target you can trade “fraction of the universe’s age during which your goal is maximized” against “fraction of the universe in which your goal is maximized,” since it won’t last forever regardless. And if you can beat entropy, then the paperclipper will copy itself exponentially forever.