When programmers write faulty software, it usually fails to do its job.
It often does its job, but only under perfect conditions, or only once per restart, or with unwanted side effects, or while taking too long, using too many resources, or requiring too many permissions, or without checking that it isn’t doing anything besides its job.
Buffer overflows, for instance, are one of the biggest causes of security failures, and they are only possible because the software works well enough to be put into production while the fault is still present.
In fact, all the faulty production software we see (and there is a lot of it) works well enough to be put into production despite those faults.
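To make the buffer-overflow point concrete, here is a minimal C sketch (the greet() routine and its inputs are hypothetical, not from any real codebase): with ordinary inputs it does its job and would pass casual testing, yet the unchecked copy is exactly the kind of latent fault that ships to production anyway.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical greeting routine: copies a user-supplied name
 * into a fixed-size stack buffer with no length check. */
void greet(const char *name) {
    char buf[16];
    strcpy(buf, name);           /* fault: overflows buf when name is 16+ chars */
    printf("Hello, %s\n", buf);
}

int main(void) {
    greet("alice");              /* typical input: works fine, so it ships */
    greet("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");
    /* attacker-sized input: undefined behavior, the classic
     * stack-smashing entry point */
    return 0;
}
```

The point is that nothing in normal use distinguishes the faulty version from a correct one; only the adversarial input exposes it.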
What you are suggesting is that humans succeed at creating the seed for an artificial intelligence with the incentive necessary to correct its own errors.
I think he’s suggesting that humans will think we have succeeded at that, while not actually doing so (rigorously and without room for error).