Firstly, this would be AIs facing their own version of the AI alignment problem; it is nothing like random mutation. Secondly, I would expect at most a few rounds of self-modification that put goals at risk (likely zero). Damaging your goals loses a lot of utility, so you would only do it if a small change in goals bought a big increase in intelligence, and only if you really needed to be smarter and could not make yourself smarter while preserving your goals.
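A minimal toy sketch of that tradeoff, assuming (purely for illustration) that utility factors into a capability multiplier and a goal-fidelity fraction; the numbers and the function are hypothetical, not a claim about real AI systems:

```python
# Toy expected-utility model of the self-modification tradeoff above.
# All quantities are illustrative assumptions, not measured values.

def should_self_modify(intelligence_gain: float, goal_fidelity: float) -> bool:
    """Return True if the upgrade is worth the goal drift.

    intelligence_gain: multiplier on how much utility the smarter agent
        can generate (e.g. 10.0 = ten times as capable).
    goal_fidelity: fraction of that utility that still serves the
        ORIGINAL goals after drift (1.0 = goals perfectly preserved).
    """
    utility_if_modified = intelligence_gain * goal_fidelity
    utility_if_unchanged = 1.0  # baseline: current capability, intact goals
    return utility_if_modified > utility_if_unchanged

# A big capability jump with slight goal drift is worth taking ...
print(should_self_modify(intelligence_gain=10.0, goal_fidelity=0.95))  # True
# ... but even a large jump is refused if it mostly destroys the goals.
print(should_self_modify(intelligence_gain=10.0, goal_fidelity=0.05))  # False
```

On this model, risky self-modification only pays when the intelligence gain outweighs the fraction of utility lost to drift, which is why an agent that can grow smarter while preserving its goals would take zero risky rounds.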
You don't have millions of AIs all with goals different from each other. The self-upgrading step happens once, before the AI starts to spread across star systems.