In the second paper, you mention radical negative utilitarians as a force that could be motivated to kill everyone, but similar considerations seem to apply to utilitarianism in general. Hedonistic utilitarians would want to convert the world into orgasmium (killing everyone in the process), varieties of preference utilitarianism might want to rewire everyone’s brains so that those brains experience maximum preference satisfaction (thus effectively killing everyone), etc.
You could argue that mere destruction would be easier than converting everything to orgasmium, but both seem hard enough to basically require a superintelligence. And if you can set the goals of a superintelligence, it’s not clear that one of the goals would be much harder than the other.
We can kill everyone today or in the near future by diverting a large asteroid to crash into Earth, or by engineering a super-plague. Doing either would take significant resources but isn’t anywhere near requiring a superintelligence. In comparison, converting everything to orgasmium seems much harder and is far beyond our current technological capabilities.
On super-plagues, I’ve understood the consensus position to be that even though you could create one with a really big death toll, actual human extinction would be very unlikely. E.g.:
Asked by Representative Christopher Shays (R-Conn.) whether a pathogen could be engineered that would be virulent enough to “wipe out all of humanity,” Fauci and other top officials at the hearing said such an agent was technically feasible but in practice unlikely.
Centers for Disease Control and Prevention Director Julie Gerberding said a deadly agent could be engineered with relative ease that could spread throughout the world if left unchecked, but that the outbreak would be unlikely to defeat countries’ detection and response systems.
“The technical obstacles are really trivial,” Gerberding said. “What’s difficult is the distribution of agents in ways that would bypass our capacity to recognize and intervene effectively.”
Fauci said creating an agent whose transmissibility could be sustained on such a scale, even as authorities worked to counter it, would be a daunting task.
“Would you end up with a microbe that functionally will … essentially wipe out everyone from the face of the Earth? … It would be very, very difficult to do that,” he said.
Asteroid strikes do sound more plausible, though there too I would expect a lot of people to be aware of the possibility and thus devote considerable effort to ensuring the safety of any space operations capable of actually diverting asteroids.
I’m not an expert on bioweapons, but I note that the paper you cite is dated 2005, before the advent of synthetic biology. The recent report from FHI seems to consider bioweapons to be a realistic existential risk.

Thanks, I hadn’t seen that. Interesting (and scary).
The problem with this consensus position is that it fails to consider that several deadly pandemics could run simultaneously, and that existential terrorists could deliberately arrange this by manipulating several viruses. A fairly simple AI could help engineer deadly plagues in droves; it would not need to be superintelligent to do so.

Personally, I see it as a big failure of the whole x-risk community that such risks are ignored and not even discussed.

Is there anything we can realistically do about it? Without crippling the whole of biotech?
Perhaps have any bioprinter, or other such tool, be constantly connected to a narrow AI, to make sure it doesn’t accidentally or intentionally print ANY viruses, bacteria, or prions.
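To make the idea concrete, here is a minimal, purely illustrative sketch of such a screening gate. Everything in it is hypothetical: KNOWN_DANGEROUS_SIGNATURES, is_order_allowed, and submit_to_printer are made-up names, and the exact-substring check merely stands in for the curated sequence databases and near-match detection that a real system (or the narrow AI proposed above) would need.

```python
# Purely illustrative sketch, not a real biosecurity system: a screening gate
# that sits between sequence-design software and a hypothetical bioprinter.
# KNOWN_DANGEROUS_SIGNATURES is a made-up stand-in for the curated pathogen
# databases that real synthesis-screening schemes rely on.

KNOWN_DANGEROUS_SIGNATURES = {
    "ATGGCGTACCTGAAA",  # placeholder entries, not real pathogen fragments
    "TTGACCGGAATTCCA",
}


def is_order_allowed(requested_sequence: str) -> bool:
    """Return False if the requested sequence contains any flagged signature."""
    seq = requested_sequence.upper()
    for signature in KNOWN_DANGEROUS_SIGNATURES:
        if signature in seq:
            return False
    # A serious screener would also catch near-matches and novel designs
    # (the role the comment assigns to a narrow AI); exact substring matching
    # is only enough to illustrate the gating idea.
    return True


def submit_to_printer(requested_sequence: str) -> str:
    """Gate every print request through the screener before synthesis."""
    if not is_order_allowed(requested_sequence):
        return "REFUSED: sequence matches a flagged signature"
    return "ACCEPTED: sent to the printer"


if __name__ == "__main__":
    print(submit_to_printer("ATGGCGTACCTGAAACCC"))  # refused
    print(submit_to_printer("GGGGCCCCAAAATTTTGG"))  # accepted
```

The point of the sketch is only the architecture: every request has to pass through the screener before the hardware acts on it. The hard part in practice is keeping the screening data up to date and catching novel or obfuscated designs, which is where the proposed narrow AI would come in.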
Jump ASAP to friendly AI or to another global control system, perhaps using many interconnected narrow AIs as an AI police force. Basically, if we don’t create a global control system, we are doomed. But it could be decentralised to escape the worst aspects of totalitarianism.
Regarding FAI research, it is a catch-22. If we effectively slow down AI research, biorisks will start to dominate. If we accelerate AI, we are more likely to create it before an implementation of AI safety theory is ready.
I could send anyone interested my article about these biorisks; I don’t want to publish it openly on the internet, as I’m hoping for a journal publication.