I think we are justified, as a society, in taking biological risks much more seriously than computational risks.
My sense is that in practice, programming is much simpler than biochemistry. With software, we are typically working within a completely designed environment, one designed to be easy for humans to reason about. We can do correctness proofs for software; we can’t do anything like that for biology.
Programs basically stay put the way they are created; organisms don’t. For practical purposes, software never evolves; we don’t have a measurable rate of bit-flip errors or the like resulting in working-but-strange programs. (And we have good theoretical reasons to believe this will remain true.)
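The claim that random mutation almost never yields a working-but-strange program can be illustrated with a toy experiment (my own sketch, not part of the original exchange): flip every single bit of a tiny program’s source and count how many mutants still run at all. Unlike organisms, software under random mutation overwhelmingly breaks outright.

```python
import itertools

# A tiny program to mutate: f(10) == 21 in the unmutated original.
SOURCE = b"def f(x):\n    return x * 2 + 1\n"

def mutation_survey(source=SOURCE):
    """Flip each bit of the source once; classify each mutant."""
    broken = same = changed = 0
    for i, bit in itertools.product(range(len(source)), range(8)):
        mutant = bytearray(source)
        mutant[i] ^= 1 << bit  # flip exactly one bit
        try:
            ns = {}
            exec(compile(bytes(mutant).decode("utf-8"), "<mutant>", "exec"), ns)
            result = ns["f"](10)
            if result == 21:
                same += 1       # behaves identically to the original
            else:
                changed += 1    # runs, but computes something different
        except Exception:
            broken += 1         # decode error, syntax error, or runtime error

    return broken, same, changed

broken, same, changed = mutation_survey()
print(f"broken: {broken}, same: {same}, changed: {changed}")
```

On this 31-byte program, the large majority of the 248 single-bit mutants fail to even run; only a handful survive as programs with altered behavior. Biological genomes, by contrast, are famously robust to point mutations.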
If a virulent disease does break loose, we have a hard time countering it, because we can’t re-engineer our bodies. But we routinely patch deployed computer systems to make them resistant to particular instances of malware. The cost of a piece of experimental malware getting loose is therefore far smaller than that of an escaped pathogen.
The entire point of researching self-improving AI is to move programs from the world of software that stays put the way it’s created, never evolving, into a world we don’t directly control.
Yes. I think the skeptics don’t take self-improving AI very seriously. Self-modifying programs are, in general, too hard to engineer, except in very narrow, specialized ways. A self-modifying program that rapidly achieves across-the-board superhuman ability seems like a fairy tale, not a serious engineering concern.
If there were an example of a program that self-improves in any nontrivial way at all, people might take this concern more seriously.