It does kinda make sense to plant the world thick with various AIs and counter-AIs, because that makes it harder for one AI to rise and take over everything. It’s a flimsy defense but maybe better than none at all.
The elephant in the room, though, is that OpenAI’s alignment efforts so far seem to be mostly about stopping the AI from saying nasty words, and even that inefficiently. It makes sense from a market perspective, but it sure doesn’t inspire confidence.
> It does kinda make sense to plant the world thick with various AIs and counter-AIs, because that makes it harder for one AI to rise and take over everything.
I’m not sure about that. It makes sense if the AIs stay more or less equal in intelligence and power, similar to humans. But it doesn’t make sense if the strongest AI stands to the next most powerful as we do to gorillas, or mice. The problem is that each of the AGIs will have the same instrumental goals of power-seeking and self-improvement, so there will be a race very similar to the race between Google and Microsoft, only much quicker and fiercer. It’s extremely unlikely that they will all grow in power at about the same rate, so one will outpace the others pretty soon. In the end, “the winner takes it all,” as they say.
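To put a number on that intuition, here’s a toy sketch (all rates and timescales are made-up assumptions, not predictions): if each AGI’s capability compounds exponentially through self-improvement, even a tiny difference in growth rate blows up into an unbounded lead.

```python
# Toy model of diverging self-improvement (illustrative assumptions only).
import math

r_leader, r_runner_up = 1.00, 0.99  # assumed growth rates: a mere 1% gap

for t in (100, 500, 1000):
    # Capability ratio grows as exp((r1 - r2) * t), so any gap compounds.
    ratio = math.exp((r_leader - r_runner_up) * t)
    print(f"t={t:4d}: leader ahead by {ratio:,.1f}x")

# t= 100: leader ahead by 2.7x
# t= 500: leader ahead by 148.4x
# t=1000: leader ahead by 22,026.5x
```

The point isn’t the specific numbers, just that “roughly equal” is an unstable equilibrium under compounding growth.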
It may be that we’ll find ways to contain AGIs, limit their power-seeking, etc., for a while. But I can’t see how this will remain stable for long. It seems like trying to stop evolution.