Thank you for the great comments! I think I can sum a lot of that up as "the situation is far more complicated and high-dimensional, and life will find a way". Yes, I agree.
I think what I had in mind was an AI system that supervises all other AIs (or AI components) and prevents them from undergoing natural selection: a kind of immune system. I don't see any reason why such a system would be naturally selected for in the short term in a way that also ensures human survival, so it would have to be built on purpose. In that model, the level of abstraction that would need to be copied faithfully is the high-level goal of preventing runaway natural selection.
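To make that picture a bit more concrete, here is a minimal toy sketch (entirely my own illustration, with hypothetical names; a real system would have to verify behaviour, not just a stored goal string): a supervisor holds a canonical copy of the high-level goal and quarantines any component whose copy has drifted, so variation-plus-selection on goals never gets going.

```python
import hashlib

# Toy sketch of the "immune system" idea (purely illustrative; all names
# here are hypothetical). The supervisor keeps a canonical high-level goal
# and flags any component whose copy of that goal no longer matches.

CANONICAL_GOAL = "prevent runaway natural selection among AI components"
CANONICAL_HASH = hashlib.sha256(CANONICAL_GOAL.encode()).hexdigest()

def audit(components: dict[str, str]) -> list[str]:
    """Return the names of components whose goal spec has drifted."""
    return [
        name for name, goal in components.items()
        if hashlib.sha256(goal.encode()).hexdigest() != CANONICAL_HASH
    ]

fleet = {
    "planner": CANONICAL_GOAL,
    "replicator": "maximise number of copies of self",  # drifted goal
}
print("quarantine:", audit(fleet))  # -> quarantine: ['replicator']
```

Obviously the hard part is everything this sketch assumes away: knowing what each component will actually do, not just what goal string it carries.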
It would be difficult to build, for all the reasons you highlight. In an immunity-versus-self-replication arms race you might ordinarily expect self-replication to win, because it only has to win once while the immune system has to win every time. But if the immune response had enough oversight of, and insight into, the system, it could potentially prevent self-replication from ever getting started. I guess that comes down to whether a future AI can predict or control future innovations of itself indefinitely.
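As a toy illustration of the "has to win every time" asymmetry (my own sketch with made-up numbers, assuming independent attempts and a fixed per-attempt catch probability):

```python
# Suppose the immune-system AI catches any given self-replication attempt
# with probability p_catch, independently per attempt. The replicator only
# needs to slip through once, so P(breakout) = 1 - p_catch ** n_attempts.

def prob_breakout(p_catch: float, n_attempts: int) -> float:
    """Probability that at least one attempt evades oversight."""
    return 1.0 - p_catch ** n_attempts

for p_catch in (0.99, 0.999, 0.9999):
    for n in (10**3, 10**6):
        print(f"p_catch={p_catch}, attempts={n}: "
              f"P(breakout) ~ {prob_breakout(p_catch, n):.4f}")
```

Under those assumptions the breakout probability goes to 1 as attempts accumulate unless the catch probability is effectively perfect, which is why prevention (stopping attempts from arising at all) looks more promising than per-attempt detection.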
I guess that comes down to whether a future AI can predict or control future innovations of itself indefinitely.
That’s a key question. You might be interested in this section on limits of controllability.
Clarifying questions:
1. To what extent can an AI predict the code it will learn from future, unknown inputs, and how that code will subsequently interact with the parts of the environment it is then connected to?
2. To what extent can an AI predict all the (microscopic) modifications that will result from the future processes involved in reproducing its hardware components?