We have another tool in the toolbox to potentially add to this.
Each time a frame comes into the AI system, it checks whether that frame is within the latent space of the training distribution, yes. (The frame is the state of all sensor inputs plus the values saved from the outputs of the last execution.)
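To make that concrete, here is a minimal sketch of one way such a check could work, assuming you already have an encoder that maps a frame to a latent vector and a batch of latent vectors collected from the training set. The Mahalanobis-distance score, the 99.9% quantile threshold, and all the names here are illustrative assumptions, not a description of any particular stack:

```python
# Minimal sketch of an "is this frame in-distribution?" check, assuming an
# encoder elsewhere has already mapped frames to latent vectors. Mahalanobis
# distance is just one common OOD score; the quantile threshold is a placeholder.
import numpy as np

class LatentOODChecker:
    def __init__(self, training_latents: np.ndarray, quantile: float = 0.999):
        # training_latents: (num_frames, latent_dim) array from the training set
        self.mean = training_latents.mean(axis=0)
        cov = np.cov(training_latents, rowvar=False)
        self.cov_inv = np.linalg.pinv(cov)  # pseudo-inverse for numerical stability
        # Calibrate the threshold on the training data itself: anything farther
        # from the training cloud than (almost) all training frames gets flagged.
        train_scores = np.array([self._score(z) for z in training_latents])
        self.threshold = np.quantile(train_scores, quantile)

    def _score(self, z: np.ndarray) -> float:
        d = z - self.mean
        return float(d @ self.cov_inv @ d)  # squared Mahalanobis distance

    def is_in_distribution(self, frame_latent: np.ndarray) -> bool:
        return self._score(frame_latent) <= self.threshold
```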
Each output can enumerate what the machine believes the next n frames will be, factoring in changes from its own actions, and for stochastic processes it can be more than one candidate frame per step.
For example, if a machine is observing a coin flip, it would need to output future frames for the coin settling in each of the 3 outcome states (heads, tails, on edge), plus some representation of the probability distribution over them.
This is hard to represent in data, but in an autonomous-car use case, for example, it can fill a voxel representation of the space around the car with collision-risk probabilities.
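For instance, a prediction map for the car case could be as simple as a voxel grid of collision probabilities. The grid extent, resolution, and field names below are made-up placeholders just to show the shape of the data:

```python
# Sketch of one possible "prediction map" representation: a voxel grid around
# the vehicle where each cell holds the predicted probability that something
# will occupy it (i.e. collision risk) at a given future time. Shapes and
# defaults are illustrative assumptions, not a real stack's format.
from dataclasses import dataclass
import numpy as np

@dataclass
class PredictionMap:
    t_future: float             # how far ahead this map predicts, in seconds
    voxel_size_m: float         # edge length of one voxel, in meters
    collision_prob: np.ndarray  # shape (X, Y, Z), values in [0, 1]

def empty_map(extent_m: float = 40.0, voxel_size_m: float = 0.5,
              height_m: float = 4.0, t_future: float = 1.0) -> PredictionMap:
    """Allocate an all-zero prediction map centered on the vehicle."""
    xy = int(extent_m / voxel_size_m)
    z = int(height_m / voxel_size_m)
    return PredictionMap(t_future, voxel_size_m,
                         np.zeros((xy, xy, z), dtype=np.float32))
```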
At runtime, a separate process then compares each of these “prediction maps” with the “ground truth map” obtained from the sensors.
You accumulate prediction error over time. If the integrated prediction error is “unusually” high (some multiple of how high it runs during simulation), you shut the machine down.
This is a more complex method, and I’m not completely confident about how to sum prediction errors in a robust way. I’m just noting that this is a measurable quantity: we can measure it with a simple algorithm we can write, and prediction error will rise sharply when the machine leaves distribution.
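As a rough illustration of the kind of simple algorithm I mean: score each frame with a per-voxel Brier score (squared difference between the predicted occupancy probability and the 0/1 occupancy the sensors actually reported), keep a rolling average, and trip the shutdown when that average exceeds some multiple k of the error level measured in simulation. The baseline, the multiplier, and the window size are all assumptions you would have to calibrate; this is a sketch, not a robust answer to the summation question:

```python
# Hedged sketch of accumulating prediction error and tripping a shutdown.
# Per-voxel Brier score is used only because it is simple and bounded; the
# simulation baseline and multiplier k are placeholders to be calibrated.
from collections import deque
import numpy as np

def frame_error(predicted_prob: np.ndarray, observed_occupancy: np.ndarray) -> float:
    """Mean per-voxel Brier score between a prediction map and the ground-truth map."""
    return float(np.mean((predicted_prob - observed_occupancy.astype(np.float32)) ** 2))

class PredictionErrorMonitor:
    def __init__(self, sim_baseline_error: float, k: float = 3.0, window: int = 100):
        self.errors = deque(maxlen=window)    # rolling window of recent frame errors
        self.limit = k * sim_baseline_error   # "unusually high" = k x simulation level

    def update(self, predicted_prob: np.ndarray, observed_occupancy: np.ndarray) -> bool:
        """Returns True if the machine should be shut down."""
        self.errors.append(frame_error(predicted_prob, observed_occupancy))
        return float(np.mean(self.errors)) > self.limit
```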
Also, we can “fix” this: take the examples that had high prediction error and add them to the training environment and the future-frame estimator for the machine. This lets you auto-add robustness to a robotics stack.
Presumably you would want humans to have to authorize these updates; otherwise, yeah, the machines could escape the factory, freeze when they hit high prediction error, add the “outside” environment to the sim automatically, train on that, unfreeze, freeze again when something unusual happens, and so on as they make their daring escape.
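A minimal sketch of that human-in-the-loop gate, with placeholder storage and names (a real pipeline would be a proper review tool, not an in-memory list):

```python
# Sketch of the authorization gate: frames that tripped the error monitor are
# queued for review, and only human-approved ones ever reach the training
# environment / future-frame estimator. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PendingExample:
    frame_id: str
    error: float
    approved: bool = False

@dataclass
class UpdateQueue:
    pending: list[PendingExample] = field(default_factory=list)

    def flag(self, frame_id: str, error: float) -> None:
        """Called automatically when a frame's prediction error is unusually high."""
        self.pending.append(PendingExample(frame_id, error))

    def approve(self, frame_id: str) -> None:
        """Called only by a human reviewer."""
        for ex in self.pending:
            if ex.frame_id == frame_id:
                ex.approved = True

    def approved_examples(self) -> list[PendingExample]:
        """Only these are ever added to the sim / retraining set."""
        return [ex for ex in self.pending if ex.approved]
```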