Do you think Leo Szilard would have had more success through overt means (political campaigning to end the human race) or through surreptitiously adding kilotons of cobalt to a device intended for use in a nuclear test? I think both strategies would have been unsuccessful (p<0.001, conditional on Szilard wishing to kill all humans).
I fully accept the following proposition: IF many humans currently have the capability to kill all humans THEN worrying about long-term AI Safety is probably a bad priority.
I strongly deny the antecedent.
I guess the two most plausible candidates would be Trump and Putin, and I believe they are exceedingly likely to leave survivors (p=0.9999).
Addressing your question: Szilard’s political action, the Einstein–Szilárd letter (https://en.m.wikipedia.org/wiki/Einstein–Szilárd_letter), directly led to the construction of the atomic bomb and the nuclear arms race. The jury is still out on whether that will wipe out the human race.
I assert that at present, the number of AGIs capable of doing as much damage as the two human figures you named is zero. I further assert that the number of humans capable of doing tremendous damage to the earth or the human race is likely to increase, not decrease.
I assert that the risk posed by AGI acting without human influence to destroy the human race will never exceed the risk of humans, making use of technology (including AGI), destroying the human race through malice or incompetence.
Therefore, I assert that your If-Then statement is more likely to become true in the future than the opposite (if no humans have the capability to kill all humans, then long-term AI Safety is probably a good priority).
(Please forgive me for a nitpick: The opposite statement would be “Many humans have the ability to kill all humans AND AI Safety is a good priority”. NOT (A IMPLIES B) is equivalent to A AND NOT B. )
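The identity behind the nitpick can be checked mechanically. Here is a quick truth-table sketch in Python (the helper name `implies` is mine) confirming that NOT (A IMPLIES B) is equivalent to A AND NOT B:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material implication: A IMPLIES B is (NOT A) OR B.
    return (not a) or b

# Check NOT (A IMPLIES B) == A AND (NOT B) over all four truth assignments.
for a, b in product([False, True], repeat=2):
    assert (not implies(a, b)) == (a and not b)
```

So denying an implication commits you to the antecedent holding while the consequent fails, which is exactly why "the opposite" of the If-Then statement is a conjunction, not another implication.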
I think I agree with all your assertions :).