Yes, I think you are right about both the difficulty / chance of failure and about the fact that there would inevitably be a lot of people opposed. Those aren’t enough to guarantee such coordination would fail, perhaps especially if it was enacted through a redundant mishmash of organizations?
I’m pretty sure there’s going to be some significant conflict along the way, no matter which path the future stumbles down.
I doubt you, or any human being, would even want to live in a world where such coordination ‘succeeded’, since it would almost certainly be in the ruins of society wrecked by countless WMDs, flung by the warring parties until all were exhausted except the ‘winners’, who would probably not have long to live.
In that sense, the possible futures where control of powerful AI ‘succeeded’ could be even worse than those where it failed.
I’m really hoping it doesn’t go that way, but I do see us as approaching a time in which the military and economic implications of AI will become so pressing that large-scale international conflict is likely unless agreements are reached. There are specific ways I anticipate tool AI advances affecting the power balance between superpower countries, even before autonomous AGI is a threat. I wake in the night in a cold sweat worrying about these things. I am terrified. I think there’s a real chance we all die soon, or that there is massive suffering and chaos, with or without war. The balance of power has shifted massively in favor of offense, and a new tenuous balance of Mutually Assured Destruction has not yet been established. This is a very dangerous time.