What evil can be perpetrated by AGI that cannot be perpetrated by a sufficiently capable human or group of colluding humans?
Leo Szilard could probably have built a bomb that would wipe out the human race; we are still here, and we do not credit that to the success of developing a ‘Friendly Hungarian’ or the success of the ‘Hungarian Safety’ research community. Arguably, Edward Teller was a ‘slightly unfriendly’ Hungarian, and we did OK with him too.
The word ‘sufficiently’ makes your claim a tautology. A ‘sufficiently’ capable human is capable of anything, by definition.
Your claim that Leo Szilard probably could have wiped out the human race seems very far from the historical consensus.
He produced a then novel scenario for a technological development which could potentially have that consequence: https://en.m.wikipedia.org/wiki/Cobalt_bomb
He also worked in the field of nuclear weapons development, and may have had access to the necessary material, equipment, and personnel required to construct such a device, or modify an existing device intended for use in a nuclear test.
I assert that my use of ‘sufficiently’ in this context is appropriate: the intellectual threshold for humanity-destroying action is fairly low, and certainly within the capacity of many humans today.
Do you think Leo Szilard would have had more success through overt means (political campaigning to end the human race) or through surreptitiously adding kilotons of cobalt to a device intended for use in a nuclear test? I think both strategies would be unsuccessful (p<0.001 conditional on Szilard wishing to kill all humans).
I fully accept the following proposition: IF many humans currently have the capability to kill all humans THEN worrying about long-term AI Safety is probably a bad priority. I strongly deny the antecedent.
I guess the two most plausible candidates would be Trump and Putin, and I believe they are exceedingly likely to leave survivors (p=0.9999).
Addressing your question: Szilard’s political action (https://en.m.wikipedia.org/wiki/Einstein–Szilárd_letter) directly led to the construction of the atomic bomb and the nuclear arms race. The jury is still out on whether that will wipe out the human race.
I assert that at present, the number of AGIs capable of doing as much damage as the two human figures you named is zero. I further assert that the number of humans capable of doing tremendous damage to the earth or the human race is likely to increase, not decrease.
I assert that the risk posed by an AGI acting without human influence to destroy the human race will never exceed the risk of humans, making use of technology (including AGI), destroying the human race through malice or incompetence.
Therefore, I assert that your If-Then statement is more likely to become true in the future than the opposite (if no humans have the capability to kill all humans, then long-term AI Safety is probably a good priority).
I think I agree with all your assertions :).
(Please forgive me for a nitpick: The opposite statement would be “Many humans have the ability to kill all humans AND AI Safety is a good priority”. NOT (A IMPLIES B) is equivalent to A AND NOT B. )
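For anyone who wants to double-check the nitpick, the equivalence NOT (A IMPLIES B) ≡ A AND NOT B can be verified mechanically by exhausting the truth table. Here is a minimal sketch in Python (the `implies` helper is just the standard material conditional, not anything from the thread):

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material conditional: A IMPLIES B is false only when A is true and B is false."""
    return (not a) or b

# Check the equivalence NOT (A IMPLIES B) == (A AND NOT B) for all four truth assignments.
for a, b in product([False, True], repeat=2):
    assert (not implies(a, b)) == (a and not b)

print("NOT (A IMPLIES B) is equivalent to A AND NOT B for all truth assignments")
```

In particular, the only row where the negated conditional is true is A=True, B=False, which is exactly why the negation of the If-Then statement above is a conjunction rather than another conditional.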