Good finds!
I think they are headed in the right direction, but I’m skeptical of the usefulness of their work on complexity. The metrics ignore the computational complexity of the model, and assume all of the variance can be modeled from sources like historical data and expert opinion. The approach also isn’t useful at all unless we can fully characterize the components of the system, which usually isn’t viable.
It also seems to ignore the (in my mind critical) difference between “we know this is uniformly distributed over the range 0-1” and “we have no idea what the distribution over the range 0-1 is.” But I may be asking for too much from a complexity metric.
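To make that distinction concrete, here’s a minimal sketch of my own (not from the paper; the loss function is a made-up stand-in): if you know the uncertain parameter is uniform on 0-1 you can average over it, but if you know nothing about its distribution, the honest summary is the worst case over the interval, which can look very different.

```python
import random


def loss(x: float) -> float:
    """Hypothetical severity of an outcome as a function of an uncertain parameter x in [0, 1]."""
    return (x - 0.9) ** 2


def expected_loss_known_uniform(samples: int = 100_000) -> float:
    """Monte Carlo estimate of E[loss(x)] when we KNOW x ~ Uniform(0, 1)."""
    return sum(loss(random.random()) for _ in range(samples)) / samples


def worst_case_loss_unknown(grid_points: int = 1_000) -> float:
    """Bound when the distribution over [0, 1] is unknown: any distribution's
    expected loss is at most the pointwise maximum over the interval."""
    return max(loss(i / grid_points) for i in range(grid_points + 1))


if __name__ == "__main__":
    print("Known uniform:", expected_loss_known_uniform())        # ~0.243
    print("Unknown distribution, worst case:", worst_case_loss_unknown())  # 0.81
```

A metric that treats both cases as the same “amount of uncertainty” is throwing away exactly the information I’d want a complexity or risk measure to track.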
My default assumption is that the metrics themselves are useless for AI purposes, but I think the intuitions behind their development might be fruitful.
I also observe that the software component of this process is stuff like complicated avionics software, which is used to being tested under adversarial conditions. It seems likely to me that if a dangerous AI were built using modern techniques like machine learning, it would be assembled in a process broadly similar to this one.