There is actual world-endangering work going on in biochemistry. Every single day, people work with Ebola, Marburg, bird/swine flus, and a host of other deadly diseases that have the potential to wipe out huge portions of humanity. All of this is treated EXTREMELY seriously, with quarantines, regulations, laws, and massively redundant safety procedures. This is to protect us from things like Ebola outbreaks in New York that have never happened outside of science fiction. If CS is not any simpler than biochemistry, and yet NO ONE is taking its dangers as seriously as those of biochemistry, then maybe there SHOULD be someone talking about “science fiction” risks.
Perhaps you should instead update on the fact that the experts in the field clearly are not reckless morons who could be corrected by ignorant outsiders, in the case of biochemistry, and probably in the case of CS as well.
I think we are justified, as a society, in taking biological risks much more seriously than computational risks.
My sense is that, in practice, programming is much simpler than biochemistry. With software, we are typically working within a completely designed environment, one designed to be easy for humans to reason about. We can do correctness proofs for software; we can't do anything comparable for biology.
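To make the "correctness proofs" point concrete, here is a minimal sketch, assuming only the Python standard library. A real proof would use a verifier such as Coq, Dafny, or Lean; this just states an exact specification and checks it exhaustively on a bounded domain, the kind of precise statement biology simply doesn't admit. The `clamp` function and its spec are hypothetical examples, not anyone's real code.

```python
# Illustrative sketch: software admits exact, machine-checkable specifications.

def clamp(x: int, lo: int, hi: int) -> int:
    """Return x limited to the closed interval [lo, hi]."""
    return max(lo, min(hi, x))

# Specification: the result always lies in [lo, hi], and equals x whenever x
# is already in that range. Checked exhaustively on a small bounded domain.
for x in range(-50, 51):
    for lo in range(-10, 11):
        for hi in range(lo, 11):
            r = clamp(x, lo, hi)
            assert lo <= r <= hi
            assert not (lo <= x <= hi) or r == x

print("bounded specification check passed")
```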
Programs basically stay put the way they are created; organisms don’t. For practical purposes, software never evolves; we don’t have a measurable rate of bit-flip errors or the like resulting in working-but-strange programs. (And we have good theoretical reasons to believe this will remain true.)
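As an illustrative sketch of the "software never evolves" point (a toy experiment, not a measurement of real systems): flip one random bit in a tiny program's source bytes and check whether the mutant is even syntactically valid, let alone a working-but-strange program. The tiny `add` program is a hypothetical stand-in.

```python
import random

SOURCE = b"def add(a, b):\n    return a + b\n"

def flip_random_bit(data: bytes) -> bytes:
    """Return a copy of data with one randomly chosen bit flipped."""
    i = random.randrange(len(data))
    bit = 1 << random.randrange(8)
    return data[:i] + bytes([data[i] ^ bit]) + data[i + 1:]

trials, still_valid = 2000, 0
for _ in range(trials):
    try:
        compile(flip_random_bit(SOURCE), "<mutant>", "exec")
        still_valid += 1
    except (SyntaxError, ValueError):
        pass  # most single-bit mutants are not even valid programs

print(f"{still_valid}/{trials} single-bit mutants still compile at all")
```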
If a virulent disease does break loose, we have a hard time countering it, because we can’t re-engineer our bodies. But we routinely patch deployed computer systems to make them resistant to particular instances of malware. The cost of a piece of experimental malware getting loose is very much smaller than with a disease.
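A minimal sketch of what "patching against a particular instance of malware" can look like, assuming only the standard library: signature-based blocking by hash. The payload below is a harmless stand-in, not real malware, and the function names are hypothetical.

```python
import hashlib

# Once a sample's hash is known, every exact copy of that payload can be refused.
sample = b"pretend this is a captured malware payload"
KNOWN_BAD_SHA256 = {hashlib.sha256(sample).hexdigest()}

def is_known_malware(payload: bytes) -> bool:
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_SHA256

print(is_known_malware(sample))         # True: exact copies of the known sample are caught
print(is_known_malware(sample + b"x"))  # False: even a one-byte variant slips past the signature
```

Note how narrow the defense is: it blocks that particular instance, which is exactly the claim being made, while any variant requires a new signature or a new patch.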
The entire point of researching self-improving AI is to move programs from the world of software that stays put the way it's created, never evolving, into a world we don't directly control.
Yes. I think the skeptics don't take self-improving AI very seriously. Self-modifying programs in general are too hard to engineer, except in very narrow, specialized ways. A self-modifying program that rapidly achieves across-the-board superhuman ability seems like a fairy tale, not a serious engineering concern.
If there were an example of a program that self-improves in any nontrivial way at all, people might take this concern more seriously.
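For contrast, here is a deliberately trivial sketch (hypothetical code, standard library only) of the closest thing that exists today: a program that benchmarks two pre-written implementations and rebinds its own `fib` to the faster one. It never invents anything; everything it can "become" was written in advance, which is why this kind of self-modification doesn't count as nontrivial self-improvement.

```python
import timeit

def fib_naive(n: int) -> int:
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_iterative(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

fib = fib_naive

def self_optimize() -> None:
    """Rebind the module-level `fib` to whichever fixed candidate runs fastest."""
    global fib
    candidates = [fib_naive, fib_iterative]
    timings = [timeit.timeit(lambda f=f: f(20), number=100) for f in candidates]
    fib = candidates[timings.index(min(timings))]

self_optimize()
print(fib.__name__)  # almost certainly 'fib_iterative'
```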
While Ebola outbreaks in New York haven't happened, Ebola is a real disease, and we know exactly what it would do if there were an outbreak in New York. In all these cases we have a pretty good handle on what the diseases would do, and we've seen extreme examples of disease in history, such as the Black Death wiping out much of Europe. That does seem like a distinct issue: no one has seen any form of serious danger from AI in the historical or present-day world.
http://en.wikipedia.org/wiki/Stuxnet
If anything, that underscores the point even more: in the small sample we do have, things haven't done much damage except for the narrow bit of damage they were programmed to do. So the essential point that we haven't seen any serious danger from AI seems valid. (Although there has been some work on automated exploit searchers which, attached to something like Stuxnet with a more malevolent goal set, could conceivably be quite nasty.)