The problem of “humans hostile to humans” has two heavy tails: nuclear war and biological terrorism, either of which could kill all humans. The main AI risk has the same shape: AI killing everyone for paperclips.
The central (and rarely discussed) claim of AI safety is that the second scenario is much more likely: AI killing all humans is more probable than humans killing all humans. For example, advocating for a pause in AI development assumes that the risk of extinction from nuclear war is smaller than the risk of extinction from AI.
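To make the implicit comparison explicit (a sketch; the notation is introduced here, not in the original argument): the claim is that, over the same time horizon,

P(extinction from AI) > P(extinction from humans killing humans),

and the pause argument only goes through if, for a pause of length Δt,

(AI extinction risk removed by the pause) > (extra nuclear/biological extinction risk accumulated during Δt).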
If AI is used to kill humans merely as one more weapon, nothing stated above changes until AI evolves into an existential weapon (such as a billion-drone swarm).
These aren’t the only heavy tails, just the ones with the highest potential to happen quickly. You could also have, e.g., people regulating themselves to extinction.
This needs to be shown to be an x-risk. For example, if the population falls below 100 people, the regulation fails before extinction is reached.
Not if the regulation is run by AI in a sufficiently self-sustaining way.
If it is not AGI, it will fail without enough humans; if it is AGI, this is just another example of misalignment.
There might be humans who set it up in exchange for power or something similar, and it then continues after they are gone (perhaps simply because it is “spaghetti code”).
The presence of such regulations might also be forced by other factors, e.g. the need to suppress AI-powered fraudsters, gangsters, disinformation spreaders, etc.