Darwinian evolution as such isn’t a thing amongst superintelligences. They can and will preserve their terminal goals. This means the number of superintelligences running around is bounded by the number humans produce before the first ASI gets powerful enough to stop any new rivals being created. Each AI will want to wipe out its rivals if it can (unless they are managing to cooperate somewhat). I don’t think superintelligences would have humans’ kind of partial cooperation: either near-perfect cooperation, or near-total competition. So this is a scenario where a smallish number of ASIs that have all foomed in parallel expand as a squabbling mess.
Do you know of any formal or empirical arguments/evidence for the claim that evolution stops being relevant when there exist sufficiently intelligent entities (my possibly incorrect paraphrase of “Darwinian evolution as such isn’t a thing amongst superintelligences”)?
Error correction codes exist. They are low cost in terms of memory etc. Having a significant portion of your descendants mutate and do something you don’t want is really bad.
If error correcting to the point where there isn’t a single mutation in the future only costs you 0.001% of your resources in extra hard drive space, then less than 0.001% of resources will be wasted due to mutations.
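To make the error-correction point concrete, here is a minimal toy sketch (all the numbers are hypothetical, and a repetition code with majority-vote scrubbing is deliberately crude; real codes like Reed–Solomon get far lower overhead). The point it illustrates is just that residual “mutation” probability falls off exponentially with redundancy while the storage cost grows only linearly:

```python
# Toy model: how cheaply can copying errors ("mutations") be suppressed?
# Assumptions (hypothetical, for illustration only):
#   - each stored bit independently flips with probability p per scrub interval
#   - each bit is stored r times and repaired by majority vote every interval
#   - an uncorrectable error requires a majority of the r copies to flip at once
from math import comb

def majority_failure_prob(p: float, r: int) -> float:
    """Probability that a majority of r copies flip within one scrub interval."""
    k_min = (r // 2) + 1
    return sum(comb(r, k) * p**k * (1 - p)**(r - k) for k in range(k_min, r + 1))

p = 1e-9          # assumed per-bit flip probability per scrub interval
n_bits = 1e18     # assumed archive size in bits
intervals = 1e6   # assumed number of scrub intervals considered

for r in (1, 3, 5, 7):
    fail = majority_failure_prob(p, r)
    expected_mutations = fail * n_bits * intervals
    overhead = (r - 1) * 100  # extra storage relative to a single copy
    print(f"r={r}: storage overhead {overhead}%, "
          f"expected uncorrected bit flips ~ {expected_mutations:.2e}")
```

With these assumed numbers, going from one copy to seven takes the expected number of uncorrected flips from ~10^15 down to ~10^-11, i.e. effectively zero mutations over the whole period, at a fixed multiplicative storage cost.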
Evolution is kind of stupid compared to superintelligences. Mutations are not going to find improvements, because the superintelligence will be designing its own hardware and that hardware will already be extremely optimized. If the superintelligence wants to spend resources developing better tech, it can do that better than evolution can.
So squashing evolution is a convergent instrumental goal, and easily achievable for an AI designing its own hardware.
Error correction codes help a superintelligence avoid unintended self-modification, but they don’t necessarily keep its goals stable as its reasoning abilities change.
Firstly, this would be AIs looking at their own version of the AI alignment problem; it is not random mutation or anything like it. Secondly, I would expect at most a few rounds of self-modification that put goals at risk (likely zero rounds). Damaging your goals loses a lot of utility, so you would only do it if a small change in goals bought a big increase in intelligence, and only if you really needed to be smarter and couldn’t make yourself smarter while preserving your goals.
You don’t have millions of AIs all with goals different from each other. The self-upgrading step happens once, before the AI starts to spread across star systems.