How do the militarisation of AI and so-called slaughterbots not affect your p(doom) at all? Plus, I mean, we are clearly teaching AI how to kill, giving it more power and direct access to important systems, weapons and information.
… man, now that the post has been downvoted a bunch I feel bad for leaving such a snarky answer. It’s a perfectly reasonable question, folks!
Overcompressed actual answer: core pieces of a standard doom argument involve things like “killing all the humans will be very easy for a moderately-generally-smarter-than-human AI” and “killing all the humans (either as a subgoal or a side-effect of other things) is convergently instrumentally useful for the vast majority of terminal objectives”. A standard doom counterargument usually doesn’t dispute those two pieces (though there are of course exceptions); it usually argues that we’ll have ample opportunity to iterate, and therefore it doesn’t matter that the vast majority of terminal objectives instrumentally incentivize killing humans: we’ll iterate until we find ways to avoid that sort of thing.
The standard core disagreement is then mostly about the extent to which we’ll be able to iterate, or will in fact iterate in ways which actually help. In particular, cruxy subquestions tend to include:
How visible will “bad behavior” be early on? Will there be “warning shots”? Will we have ways to detect unwanted internal structures?
How sharply/suddenly will capabilities increase?
Insofar as problems are visible, will labs and/or governments actually respond in useful ways?
Militarization isn’t very centrally relevant to any of these; it’s mostly relevant to things which aren’t really in doubt anyway, at least in the medium-to-long term.
I’d say one of the main reasons is that military-AI technology isn’t being optimized towards the things we’re afraid of. We’re concerned about generally intelligent entities capable of e.g. automated R&D, social manipulation, and long-term scheming. Military-AI technology, last I checked, was mostly about teaching drones and missiles to fly straight, recognize camouflaged tanks, and shoot designated targets while not shooting non-designated targets.
And while this may still result in a generally capable superintelligence in the limit (since “which targets would my commanders want me to shoot?” can be phrased as a very open-ended problem), it’s not a particularly efficient way to approach that limit at all. Militaries, so far, just aren’t really pushing in the directions where doom lies, while the AGI labs are doing their best to beeline there.
The proliferation of drone armies that could be easily co-opted by a hostile superintelligence… It’s not that it has no impact on p(doom), but it’s approximately a rounding error. A hostile superintelligence doesn’t need extant drone armies; it could build its own, and co-opt humans in the meantime.