I haven’t worked on any projects that are as novel or as large as a recursively self-modifying AI. Of the projects I have worked on, not all of them went without hiccups, and novelty and scope did not seem to make things any easier to pull off smoothly. It would not surprise me terribly if the first AI created does not go entirely according to plan.
Sure. Looking at the invention of powered flight, some people may even die. But that is a bit different from everyone dying.
Do we have any reason to believe that aeroplanes could kill the entire human race, even if everything went wrong?