I agree with this post. However, I think it’s common amongst ML enthusiasts to eschew specification and defer to statistics on everything. (Or datapoints trying to capture an “I know it when I see it” “specification”.)
That’s true, and from what I can see it emerges from the culture of academia. There, people are doing research, and the goal is to see whether something can be done, or what happens if you try something new. That’s fine for discovery, but it’s insufficient for safety. That’s why certain types of research that pose dangers to researchers or the public have at least some degree of oversight imposing safety requirements. ML does not, yet.