> It certainly makes interpretability harder
I’m not at all convinced of this. In fact, I suspect self-optimizing systems will be more interpretable, assuming we’re willing to put any effort toward that goal. See my comment here making this case.