Perhaps it would be helpful to provide some examples of how closed-loop AI optimization systems are used today; this may illuminate the negative consequences of a generalized policy restricting their implementation.
The majority of advanced process manufacturing systems use some form of closed-loop AI control (Model Predictive Control) that incorporates neural networks for state estimation, and even for inference on the dynamics of the process (how a change in a manipulated variable leads to a change in a target control variable, and how those changes evolve over time). The ones that don't use neural nets use some sort of symbolic regression algorithm that can handle high dimensionality, non-linearity, and multiple competing objective functions.
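To make the idea concrete, here is a minimal sketch of a single MPC step. Everything here is illustrative: `learned_dynamics` is a hypothetical stand-in for a trained neural net (real systems would use a fitted model, a longer horizon, and a proper optimizer rather than a grid search), but it shows the basic loop of rolling a learned model forward and picking the move that best tracks the setpoint.

```python
import numpy as np

def learned_dynamics(x, u):
    # Placeholder for a trained neural net: predicts the next process
    # state from the current state x and manipulated-variable value u.
    return 0.9 * x + 0.5 * u

def mpc_step(x, setpoint, horizon=5, candidates=np.linspace(-1, 1, 41)):
    """Pick the manipulated-variable value that minimizes predicted
    tracking error over the horizon (simple exhaustive search over a
    grid of candidate moves, for clarity)."""
    best_u, best_cost = None, float("inf")
    for u in candidates:
        xk, cost = x, 0.0
        for _ in range(horizon):
            xk = learned_dynamics(xk, u)      # roll the model forward
            cost += (xk - setpoint) ** 2      # penalize deviation from target
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

u = mpc_step(x=0.0, setpoint=1.0)
```

In a real controller this step runs every control interval: only the first move is applied, the plant responds, and the optimization is re-solved from the new measured state.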
These systems have been in place since the mid-90s (and are in fact among the earliest commercial applications of neural nets; check the patent history).
Self-driving cars, autonomous mobile robots, unmanned aircraft, and so on: all of these are closed-loop AI optimization systems. Even advanced HVAC systems, ovens, and temperature control systems adopt these techniques.
These systems are already constrained in the sense that limitations are imposed on the degree and magnitude of adaptation that is allowed to take place. For example, the rate, direction, and magnitude of a change to a manipulated variable are constrained by upper and lower control limits, and by other factors that account for safety and robustness.
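A sketch of what those constraints look like in practice, under assumed limits (the bound values and the helper name `constrain_move` are hypothetical): whatever move the optimizer proposes, it is clamped to a maximum change per control interval and to absolute upper/lower control limits before it ever reaches the actuator.

```python
def constrain_move(u_prev, u_requested, lo=0.0, hi=100.0, max_rate=2.0):
    """Clamp an optimizer-proposed move to engineering limits:
    a maximum change per control interval (rate limit) and
    absolute upper/lower control limits."""
    # Limit how fast the manipulated variable may change per step
    step = max(-max_rate, min(max_rate, u_requested - u_prev))
    # Then enforce the absolute control limits
    return max(lo, min(hi, u_prev + step))

a = constrain_move(50.0, 80.0)    # large jump gets rate-limited
b = constrain_move(99.5, 105.0)   # request beyond the upper limit gets clamped
```

The controller never executes the raw optimizer output; the constraint layer sits between the AI and the plant, which is exactly the kind of bounded autonomy the policy discussion is about.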
To determine whether (or how) rules around 'human in the loop' should be enforced, we should start by acknowledging how control engineers have already solved similar problems in applications that are ubiquitous in industry.