One approach could be to specify under what conditions natural selection will and will not sneak in.
Yes!
Natural selection requires variation. Information theory tells us that all information is subject to noise and therefore variation across time. However, we can reduce error rates to arbitrarily low probabilities using coding schemes. Essentially this means that it is possible to propagate information across finite timescales with arbitrary precision. If there is no variation then there is no natural selection.
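As a toy illustration of the coding-theory point (this sketch is mine, not from the discussion; the channel model and parameters are made up), a simple repetition code with majority-vote decoding already drives the effective error rate far below the raw channel error rate:

```python
import random

def transmit(bit, flip_prob, rng):
    """Send one bit through a noisy channel that flips it with probability flip_prob."""
    return bit ^ (rng.random() < flip_prob)

def send_with_repetition(bit, n_copies, flip_prob, rng):
    """Encode a bit as n_copies repetitions; decode by majority vote."""
    votes = sum(transmit(bit, flip_prob, rng) for _ in range(n_copies))
    return int(votes > n_copies // 2)

def empirical_error_rate(n_copies, flip_prob, trials=20_000, seed=0):
    """Estimate how often the decoded bit differs from the sent bit."""
    rng = random.Random(seed)
    errors = sum(send_with_repetition(1, n_copies, flip_prob, rng) != 1
                 for _ in range(trials))
    return errors / trials

# With a 10% raw flip probability, majority voting over 11 copies
# pushes the decoded error rate down by orders of magnitude.
raw = empirical_error_rate(1, 0.10)
coded = empirical_error_rate(11, 0.10)
```

More copies push the error rate toward zero exponentially, which is the sense in which information can be propagated with arbitrary (but never exactly perfect) fidelity over a finite timescale.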
Yes! The big question to me is whether we can reduce error rates enough. And “error rates” here means not just hardware signal error, but also randomness that comes from interacting with the environment.
In abstract terms, evolutionary dynamics require either a smooth adaptive landscape such that incremental changes drive organisms towards adaptive peaks and/or unlikely leaps away from local optima into attraction basins of other optima. In principle AI systems could exist that stay in safe local optima and/or have very low probabilities of jumps to unsafe attraction basins.
It has to be smooth relative to the jumps that can be achieved by whatever is generating the variation. Natural mutations don’t typically make large jumps. But a small change in motivation for an intelligent system may cause a large shift in behaviour.
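The point that smoothness is relative to jump size can be made concrete with a toy simulation (my own sketch; the 1-D two-peak landscape and all parameters are invented for illustration). A hill-climber at a safe peak stays put when mutations are small relative to the valley, but with large mutation jumps it can land directly in the higher “unsafe” basin and never come back:

```python
import math
import random

def fitness(x):
    """Hypothetical 1-D landscape: a safe peak at x=0 and a higher
    'unsafe' peak at x=4, separated by a low-fitness valley."""
    safe = 1.0 * math.exp(-x * x)
    unsafe = 1.5 * math.exp(-(x - 4.0) ** 2)
    return safe + unsafe

def escape_fraction(step_size, n_runs=500, n_steps=200, seed=0):
    """Hill-climb from the safe peak with Gaussian mutations of a given size,
    keeping only non-worse mutants; return the fraction of runs that end
    in the unsafe basin (x > 2)."""
    rng = random.Random(seed)
    escapes = 0
    for _ in range(n_runs):
        x = 0.0
        for _ in range(n_steps):
            candidate = x + rng.gauss(0.0, step_size)
            if fitness(candidate) >= fitness(x):  # selection step
                x = candidate
        if x > 2.0:
            escapes += 1
    return escapes / n_runs

small = escape_fraction(0.1)  # small mutations: stuck near the safe peak
large = escape_fraction(3.0)  # large jumps: routinely land in the unsafe basin
```

Under small mutations the intermediate valley blocks every path, so the landscape is effectively a single safe optimum; under large jumps the same landscape offers a direct route to the unsafe peak.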
I believe that natural selection requires a population of “agents” competing for resources. If we only have a single AI system, then there is no competition and no immediate adaptive pressure.
I thought so too at first. I still don’t know what the right conclusion is, but I think that substrate-needs convergence is at least still a risk even with a singleton. Something smart enough to be a general intelligence is probably complex enough to have internal parts that operate semi-independently, and these parts can therefore compete with each other.
I think the singleton scenario is the most interesting, since I think that if we have several competing AIs, then we are just super doomed.
And by singleton I don’t necessarily mean a single entity. It could also be a single alliance. The boundary between group and individual might not be as clear with AIs as with humans.
Other dynamics will be at play which may drown out natural selection. There may be dynamics that occur at much faster timescales than this kind of natural selection, such that adaptive pressure towards resource accumulation cannot get a foothold.
This will probably be correct for a time. But will it be true forever? One of the possible end goals for alignment research is to build the aligned superintelligence that saves us all. If substrate convergence is true, then this end goal is off the table. Because even if we reach this goal, it will inevitably either start to drift in values towards self-replication, or get eaten from the inside by parts that have mutated towards self-replication (AI cancer), or something like that.
Other dynamics may be at play that can act against natural selection. We see existence-proofs of this in immune responses against tumours and cancers. Although these don’t work perfectly in the biological world, perhaps an advanced AI could build a type of immune system that effectively prevents individual parts from undergoing runaway self-replication.
Cancer is an excellent analogy. Humans defeat it in a few ways that work together:
1. We have evolved to have cells that mostly don’t defect.
2. We have an evolved immune system that attacks cancer when it does happen.
3. We have developed technology to help us find and fight cancer when it happens.
4. When someone gets cancer anyway and it can’t be defeated, only they die; it doesn’t spread to other individuals.
Point 4 is very important. If there is only one agent, this agent needs perfect cancer-fighting ability to avoid being eaten by natural selection. The big question to me is: is this possible?
If, on the other hand, you have several agents, then you definitely don’t escape natural selection, because these entities will compete with each other.
I think it might be true that substrate convergence is inevitable eventually. But it would be helpful to know how long it would take. Potentially we might be ok with it if the expected timescale is long enough (or the probability of it happening in a given timescale is low enough).
I think the singleton scenario is the most interesting, since I think that if we have several competing AIs, then we are just super doomed.
If that’s true, then that is a super important finding! And also an important thing to communicate to people! I hear a lot of people say the opposite: that we need lots of competing AIs.
I agree that analogies to organic evolution can be very generative. Both in terms of describing the general shape of dynamics, and how AI could be different. That line of thinking could give us a good foundation to start asking how substrate convergence could be exacerbated or avoided.
Potentially we might be ok with it if the expected timescale is long enough (or the probability of it happening in a given timescale is low enough).
Agreed. I’d love for someone to investigate the possibility of slowing down substrate convergence enough that the problem is basically solved.
If that’s true, then that is a super important finding! And also an important thing to communicate to people! I hear a lot of people say the opposite: that we need lots of competing AIs.
Hm, to me this conclusion seems fairly obvious. I don’t know how to communicate it though, since I don’t know what the crux is. I’d be up for participating in a public debate about this, if you can find me an opponent. Although not until after AISC research lead applications are over and I’ve had some time to recover. So maybe late November at the earliest.