By default, AGI will be extremely destructive, and we don’t yet know how to make AGI not be destructive.
Much the same was true for the invention of fire (can easily burn down houses), metal (can easily be used to make knives and swords and tanks) and flight (can easily be used to drop bombs on people).
In each case, some people died through misuse of the technology.
We can expect that again, though this time it may not be quite so bad, due to the shifting moral zeitgeist and moral progress.
Your examples are not by default extremely destructive. (Well, people might have been justified in thinking fire was destructive by default.)
Computer programs and aeroplanes crash “by default”. That has little to do with what computer programs and aeroplanes actually do.
What happens “by default” typically has precious little to do with what actually happens—since agent preferences get involved in between.
What happens “by default” typically has very much to do with what actually happens the first few times, unless extraordinary levels of caution and preparation are applied.
Things like commercial flight and the space program show that reasonable levels of caution can be routinely applied when lives are at stake.
The usual situation with engineering is that you can have whatever level of safety you are prepared to pay for.
As I understand it, most lives are currently lost to engineering through society-approved tradeoffs—in the form of motor vehicle accidents. We know how to decrease the death rate there, but getting from A to B rapidly is widely judged to be more important.
It is easy to imagine how machine intelligence is likely to produce similar effects, via unemployment. We could rectify such effects via a welfare state, but it isn’t clear how popular that will be. We can, pretty easily, see this one coming. If the concern is with human lives, we can see now that we will need to make sure that unemployed humans have a robust safety net, and that’s a relatively straightforward political issue.
If you accept that a single failure could mean extinction or worse, the history of rockets and powered flight isn’t exactly inspiring.
If you have a system in which a single failure could mean extinction of anything very important, then it seems likely that there must have been many failures in safety systems and backups leading up to that situation, which would seem to count against the idea of a single failure. We have had many millions of IT failures so far already.