I don’t think this is core to alignment, though it’s probably a good idea overall. Making it easier to kill or threaten to kill people is anti-human-friendly on its own, even if all the agency involved is from a subset of humanity.
More importantly, I don’t know that anyone who disagrees is likely to engage here—“let’s agree” doesn’t move very far forward unless you can identify those who need to agree and why they don’t. I’d start with how to overcome the argument that some humans (which ones depends on who you ask) need to be stopped in their harmful actions, and killing or threatening to kill is the most expeditious way to do so. Without that underlying agreement, it’s hard to argue that safer (for “us”) mechanisms are wrong.
Yes, sometimes we need to prevent humans from causing harm. For sub-national cases, current technology is sufficient. On the scale of nations, we should agree to concrete limits on the intelligence of weapons, and trust our fellow humans to follow those limits. Our governments have made progress on this issue, though there is more to be made.
For example:
https://www.csis.org/analysis/one-key-challenge-diplomacy-ai-chinas-military-does-not-want-talk
“With such loud public support in prominent Chinese venues, one might think that the U.S. military need only ask in order to begin a dialogue on AI risk reduction with the Chinese military.
Alas, during my tenure as the Director of Strategy and Policy at the DOD Joint Artificial Intelligence Center, the DOD did just that, twice. Both times the Chinese military refused to allow the topic on the agenda.
Though the fact of the DOD’s request for a dialogue and China’s refusal is unclassified—nearly everything that the United States says to China in formal channels is—the U.S. government has not yet publicly acknowledged this fact. It is time for this telling detail to come to light.
...(Gregory C. Allen is the director of the Artificial Intelligence (AI) Governance Project and a senior fellow in the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C)”
On a topic so vital to international welfare, officials from these two countries should be holding many discussions, especially now that video-conference technology has made international dialogue far more convenient.
Why, then, have we heard of so little progress on this matter? On the contrary, development of lethal AI weapons continues at a brisk pace.