Do you know of any formal or empirical arguments/evidence for the claim that evolution stops being relevant when there exist sufficiently intelligent entities (my possibly incorrect paraphrase of “Darwinian evolution as such isn’t a thing amongst superintelligences”)?
Error-correcting codes exist, and they are cheap in terms of memory and other resources. Having a significant portion of your descendants mutate and do something you don't want is really bad.
If error-correcting to the point where there is not a single mutation in the entire future costs you only 0.001% of your resources in extra storage, then less than 0.001% of your resources will be wasted due to mutations.
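To make the cheapness claim concrete, here is a minimal sketch, assuming independent bit errors and a block code with roughly BCH-like parameters (the specific numbers, error rate, and code parameters are illustrative assumptions, not from the original comment). A few percent of storage overhead already pushes the per-block probability of an uncorrected error to astronomically small levels, and stacking more layers of redundancy drives it lower still:

```python
# Sketch: how much redundancy is needed to make uncorrected errors negligible.
# Assumptions (hypothetical, for illustration): independent bit flips with
# probability p_bit, and a block code over n-bit blocks that corrects up to
# t errors per block (parameters roughly BCH-like).
from math import comb

def p_block_failure(n, t, p_bit):
    """Probability that a block of n bits suffers more than t errors,
    i.e. more errors than the code can correct."""
    return sum(comb(n, k) * p_bit**k * (1 - p_bit)**(n - k)
               for k in range(t + 1, n + 1))

p_bit = 1e-9                 # assumed raw per-bit error probability
n, k_data, t = 1023, 983, 4  # 40 check bits per block, corrects up to 4 errors

overhead = (n - k_data) / k_data
print(f"storage overhead: {overhead:.2%}")                       # ~4%
print(f"uncorrected-error probability per block: "
      f"{p_block_failure(n, t, p_bit):.2e}")                     # ~1e-32
```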
Evolution is kind of stupid compared to superintelligences. Mutations are not going to find improvements, because the superintelligence will be designing its own hardware, and that hardware will already be extremely optimized. If the superintelligence wants to spend resources developing better tech, it can do that better than evolution can.
So squashing evolution is a convergent instrumental goal, and easily achievable for an AI designing its own hardware.
Error-correcting codes help a superintelligence avoid unintended self-modification, but they don't necessarily keep goals stable as its reasoning abilities change.
Firstly, this would be AIs facing their own version of the AI alignment problem; it is not random mutation or anything like it. Secondly, I would expect at most a few rounds of self-modification that put goals at risk (likely zero rounds). Damaging your goals loses a lot of utility, so you would only do it if a small change in goals bought a big increase in intelligence, and only if you really needed to be smarter and couldn't make yourself smarter while preserving your goals.
You don't end up with millions of AIs, each with goals different from the others. The self-upgrading step happens once, before the AI starts to spread across star systems.