The biggest risk of “existential risk mitigation” is that it will be used by “precautionary principle” zealots to shut down scientific research. There is some evidence that this has already been attempted; see the fear-mongering associated with the startup of the new collider at CERN.
A slowdown in new science, let alone an actual halt, is the one thing I am certain will increase future risks, since it will undercut our ability to deal with any disasters that actually do occur.
Was there really deceptive fear-mongering? That’s news to me. Fear was overblown, but I don’t think anyone was using it for anything other than what they thought was safety.
I highly doubt this. All plausible major x-risks appear to be man-made. Slowing down would give us more time to see them coming. Why would it undercut our ability to deal with a disaster?
I’m not highly read on the criticisms, but it wouldn’t surprise me if someone vaguely influential invoked the CERN hysteria to argue for reducing the funding of basic research. But I don’t have a cite for you.
It’s not clear to me that asteroid impacts, major plagues, or becoming caught in a Malthusian trap are not x-risks on the same order of magnitude as man-made x-risks. (Yes, a Malthusian trap is man-made, but it can’t necessarily be prevented by stopping scientific research). And for man-made x-risks, what is the mechanism for “seeing the disaster coming” that isn’t essentially doing more research?
A major plague is not, strictly speaking, an existential risk, although it would cause a great deal of suffering. It would delay a Malthusian trap, though...
Making science slow down means making the best and brightest stop doing their best work in research, which drives them into optimizing algorithmic trading instead.
Also, you would want to slow down research into new things and increase research into their implications; but where do you draw the line? Is the fact that a nuclear reactor can go critical and level a nearby city useful cautionary knowledge for building a power plant, or a “stop giving them ideas” thing?
ETA: I do not mean that any currently running reactor is that bad; I mean the problem of how to research nuclear fission in the years 1900–1925 so as to have a safe nuclear power plant before a nuclear bomb.
If you claim that a modern nuclear reactor can level a nearby city, you are telling a falsehood.
I was slightly unclear. Your statement is true.
I am not saying that a modern nuclear reactor can level a city. I neither claim nor deny that the worst currently running reactor could level a city under reasonably imaginable conditions (I tend to agree that the fallout would be a problem and that a full-scale nuclear explosion is very unlikely, but I do not have enough evidence or knowledge to be sure either way).
I am describing a situation in the research of nuclear fission. Imagine that someone knows that a bigger pile of uranium emits more radiation and wants to build a power plant based on this in 10–20 years. Some research is done in order to predict the behaviour of such a system; of course, there are no power plant designs from Earth-2010-our-timeline to consult.
How should one do the research so as to prevent Chernobyl-type disasters, minimize the risk of Fukushima-type disasters, and not find something that lets the military build a nuclear bomb before the first nuclear power plant is built?
Note that one needs to do enrichment both for a power plant and for a bomb.
It is true that simply piling up even warhead-grade enriched uranium will not lead to a weapon-scale explosion, but the results of building a reactor without careful research into the implications are not likely to be good.
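A rough sketch of why piling up material does not give a weapon-scale explosion (this is not from the comment above, just the standard back-of-the-envelope chain-reaction arithmetic, with $k$, $\ell$ and $n_0$ as the usual textbook symbols): the neutron population in a fissile assembly grows roughly as

$$n(t) \approx n_0 \, e^{(k-1)\,t/\ell},$$

where $k$ is the average number of neutrons from each fission that go on to cause another fission and $\ell$ is the mean time between generations. A reactor is held at $k \approx 1$; a weapon needs $k$ well above 1 and needs the assembly to hold together for the microseconds it takes to run through many generations. A slowly assembled pile heats up, expands or melts, and falls back below $k = 1$ after releasing only a tiny fraction of a weapon-scale yield, which is why criticality accidents fizzle rather than level cities.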
Will a halt in new science undercut our ability to deal with those disasters to a greater extent than it makes those disasters more likely? What if the halt were only in certain domains, like genetic engineering of deadly viruses?
There’s no reason to believe that we’ve reached the optimum point for ending scientific research in any particular field. If we’d stopped medical research in 1900, the 1918 flu pandemic would have been worse. And basic research doesn’t have a label telling us how it’s going to be useful, yet the evidence is pretty strong that basic research is worth the money.
Regarding your specific example, isn’t it worth knowing that the mutations needed to make that virus (1) already exist in nature, and (2) aren’t really that far from being naturally incorporated into a single virus? If it took 500 passes instead of 10, we’d be relieved to learn that, right? In short, it seems like this kind of research is likely to be of practical use in treating serious flu viruses in the relatively near future.
The question is not “Is it useful?” but “Is it useful enough to justify the risk?” In that case, the answer might well be yes, but there will probably be cases in the future where the knowledge is not worth the risk.
I agree that you have identified the right question. I disagree with you on when the balance shifts. In particular, I think you’ve picked a bad example of “dangerous” research, because I don’t think the virus research you identified is a close question.
(That said, not my downvotes)
Upon further research, you’re right. The research appears not to be as dangerous as it seemed at first glance.