Meaningful human oversight of AI decisions. While AI systems may grow capable of assisting humans with important decisions, AI decision-making should not be made fully autonomous: the inner workings of AIs are inscrutable, and while they often give reasonable results, they fail to give highly reliable ones [68]. It is crucial that actors remain vigilant and coordinate on maintaining these standards in the face of future competitive pressures. Keeping humans in the loop on key decisions allows irreversible decisions to be double-checked and foreseeable errors to be avoided.
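As a concrete illustration of the "human in the loop on irreversible decisions" pattern, here is a minimal Python sketch. It assumes a hypothetical setup in which an AI proposes actions and anything flagged as irreversible is gated on explicit human approval before execution; the names (`ProposedAction`, `require_human_approval`, `run_with_oversight`) are invented for this example, not taken from any particular system.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """An action suggested by an AI system (hypothetical structure)."""
    description: str
    irreversible: bool

def require_human_approval(action: ProposedAction) -> bool:
    """Block until a human reviewer approves or rejects the action."""
    answer = input(f"Approve irreversible action {action.description!r}? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: ProposedAction) -> None:
    # Placeholder for the real side effect (deploy, trade, prescribe, ...).
    print(f"Executing: {action.description}")

def run_with_oversight(action: ProposedAction) -> None:
    # Routine, reversible actions may proceed autonomously; irreversible
    # ones are gated on an explicit human decision, so foreseeable errors
    # can be caught before they become unrecoverable.
    if action.irreversible and not require_human_approval(action):
        print(f"Rejected by human reviewer: {action.description}")
        return
    execute(action)

if __name__ == "__main__":
    run_with_oversight(ProposedAction("delete production database", irreversible=True))
```

The gate itself is trivial; the argument below is that the hard part is whether the human at the prompt can still evaluate the proposal competently.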
As AIs automate the “routine” and low-level aspects of practice in decision-making domains, from systems engineering to law and medicine, experts’ capability to integrate information from lower-level and aspectual models into a rational high-level decision (or to review an AI’s decision) will rapidly deteriorate. To make good decisions, people need to “own the whole stack” in their minds. There may be technological ways to circumvent this, but only through tight human-AI integration, such as brain-computer interfaces. Helping humans evaluate decisions through natural-language explanations and demonstrations will not yield greater reliability than AI self-evaluation using the same explanations; worse, these explanations may be mere post-hoc justifications, while the decision is actually made through connectionist, intuitive reasoning and integration of evidence and models.
So, even if systems for human oversight of AI decisions are put in place, I just don’t see how they could realistically be used effectively if we continue to automate the “boring parts” of programming, engineering, analysis, management, etc. as quickly as we do today.