Addressing your question: Szilard’s political action (https://en.m.wikipedia.org/wiki/Einstein–Szilárd_letter) directly led to the construction of the atomic bomb and to the nuclear arms race. The jury is still out on whether that will wipe out the human race.
I assert that at present, the number of AGIs capable of doing as much damage as the two human figures you named is zero. I further assert that the number of humans capable of doing tremendous damage to the earth or the human race is likely to increase, not decrease.
I assert that the risk of an AGI, acting without human influence, destroying the human race will never exceed the risk of humans, making use of technology (including AGI), destroying the human race through malice or incompetence.
Therefore, I assert that your If-Then statement is more likely to become true in the future than the opposite (i.e., if no humans have the capability to kill all humans, then long-term AI safety is probably a good priority).
(Please forgive a nitpick: the opposite statement would be “Many humans have the ability to kill all humans AND AI safety is a good priority,” since NOT (A IMPLIES B) is equivalent to A AND NOT B.)
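For concreteness, here is that equivalence worked out step by step (a minimal sketch, assuming the standard material-implication reading of IF-THEN; A and B are just the placeholder clauses from the nitpick above):

NOT (A IMPLIES B)
= NOT ((NOT A) OR B)   [A IMPLIES B is shorthand for (NOT A) OR B]
= A AND (NOT B)        [De Morgan's law plus double negation]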
I think I agree with all your assertions :).