The warning that AI will be deeply incorporated into human affairs (making decisions that no one understands, etc.) is legitimate, though there's a strong argument that many of the decisions governments and organizations make today are not well understood by the general population either.
A solution could be governments making deals similar to anti-nuclear-proliferation treaties, but for AI. This would require working out a lot of details around constraints, incentives, punishments, and mechanisms for oversight. There's also a risk that once we have a good treaty/agreement in place, someone will simply invent newer technology that circumvents the agreed-upon constraints. Short of outlawing AI across the globe, monitoring AI development just seems too complex. The first Chernobyl-type event involving AI may have already happened without anyone being the wiser.
How could an event on the same level as Chernobyl go unnoticed? Or did you mean the same type, not level?
There could be a tragic public mishap involving AI that does a lot of damage and would likely inform AI laws and policy for years to come; that is what I was referring to as a 'Chernobyl-type event.' That scenario seems the most likely (if it occurs at all), but we can also imagine an AI that becomes self-aware and quietly grows while hiding malicious intentions. It could then affect human affairs over long periods of time. That would be a disaster on the same level as Chernobyl (if not orders of magnitude worse), yet it would go unnoticed because it would be so slow and subtle. It may have already started: perhaps there's a rogue AI out there causing increased infertility, climate disasters, or pandemics. This is not meant to scare anyone or launch conspiracy theories, but to illustrate the complexity of pioneering concrete measures that would keep AI in check across the globe.