“Thing” is tricky. Maybe something like the set of intuitions and arguments we have around learned optimizers, i.e. the basic argument that ML will likely produce a system that is “trying” to do something, and that it can end up performing well on the training distribution regardless of what it is “trying” to do (and this is easier the more capable and knowledgeable it is). I don’t think we really know much about what’s going on here, but I do think it’s an important failure mode to be aware of, and at least folks are looking for it now. So I do think that if it happens we’re likely to notice it earlier than we would with a purely experimentally-driven approach, and it’s possible that at the extreme you would just totally miss the phenomenon. (This may not be fair to put in the last 10 years, but thinking about it sure seemed like a mess >10 years ago.)
(I may be overlooking something such that I really regret that answer in 5 minutes but so it goes.)