Ideally, AI researchers would stop and correct broken systems rather than hack around failures and redeploy them.
Ordinary computer programmers don’t do this. (As it is written, “move fast and break things.”) What will spur AI developers to greater caution?