Even if what I said isn’t literally true, it’s still true that the cutting-edge research in pattern recognition is in deep learning, where the algorithms are, in some sense, highly nontransparent.
Upon reading your comment about non-transparency in DL, I thought of the exact same paper on visualizing ANN features that Dr_Manhattan posted. There was a recent post on the machine learning subreddit about using similar techniques to investigate the neural representations automatically learned in language model RNNs.
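To make the idea concrete, here is a minimal sketch of the activation-maximization style of technique those visualization papers build on, written in PyTorch. The toy convnet, layer sizes, and chosen channel are hypothetical stand-ins, not models from the papers mentioned; the point is just that you can gradient-ascend on the input itself until it maximally excites one learned feature:

```python
# Hedged sketch of activation maximization: start from noise and run
# gradient ascent on the input image to find a pattern that maximally
# excites one chosen feature map. The tiny convnet below is a
# hypothetical placeholder, not a model from the papers discussed.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=5, padding=2),
    nn.ReLU(),
)
model.eval()

image = torch.randn(1, 3, 64, 64, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)
channel = 3  # which feature map in the last layer to visualize (arbitrary)

for step in range(200):
    optimizer.zero_grad()
    activation = model(image)[0, channel]
    loss = -activation.mean()  # ascend on the channel's mean activation
    loss.backward()
    optimizer.step()

# `image` now approximates the input pattern that feature responds to,
# which is one way of rendering a learned feature human-inspectable.
```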
There is no intrinsic connection between transparency and automatic feature learning techniques. Consider a large research team where the work of building a visual system is divided amongst dozens of researchers, each hand-engineering specific features for certain layers/modules. The resulting features are not intrinsically opaque just because the lead researcher doesn’t understand the details of every feature each engineer came up with. The lead researcher simply needs to invest the time in understanding those features (if so desired).
Deep learning simply automates that tedious feature engineering process. You can always investigate the features or specific circuits the machine came up with, if so desired. It is true that ML tools (and DL optimization tools in particular) are often used as black boxes, where the researcher doesn’t know or care about the details of the solution; but that does not imply that the algorithms themselves are intrinsically opaque.
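As an illustration of that last point, here is a minimal sketch (again in PyTorch, with a hypothetical toy model) of how the learned features are always sitting there to inspect: forward hooks capture any intermediate representation the network computes, and the raw weights that compute it are directly readable too.

```python
# Hedged sketch of "opening the black box": a forward hook captures a
# hidden representation at inference time, and the layer's weights can
# be read off directly. The model and layer name are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

captured = {}

def save_activation(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

# Attach the hook to the first linear layer's output.
model[0].register_forward_hook(save_activation("hidden"))

x = torch.randn(4, 10)
model(x)
print(captured["hidden"].shape)        # the learned intermediate features
print(model[0].weight.detach().shape)  # and the weights that compute them
```

Nothing stops a researcher from doing this; it is a question of whether anyone invests the time, not of intrinsic opacity.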