My default assumption is that the metrics themselves are useless for AI purposes, but I think the intuitions behind their development might be fruitful.
I also observe that the software component of this process consists of things like complex avionics software, which is routinely tested under adversarial conditions. It seems likely to me that if a dangerous AI were built using modern techniques like machine learning, it would be assembled through a broadly similar process.