Image interpretability seems mostly so easy because humans are already really good
Thank you, this is a good point! I wonder how much of this is humans “doing the hard work” of interpreting the features. It raises the question of whether we will be able to interpret more advanced networks, especially if they evolve features that don’t overlap with the way humans process inputs.
The language model idea sounds cool! I don’t know language models well enough yet, but I might come back to this once I get to work on transformers.