This is a good point, and this is where I think a good amount of the difficulty lies, especially as the cited example of human-interpretable NNs (i.e., Microscope AI) doesn't seem easily applicable to things outside of image recognition.
I just want to flag that, like Evan, I don't understand the usage of the term "microscope AI" in the OP. My understanding is that the term (as described here) describes a certain way to use a NN that implements a world model, namely, looking inside the NN and learning useful things about the world. It's an idea about how to use transparency, not how to achieve transparency.