I just want to flag that, like Evan, I don’t understand the usage of the term “microscope AI” in the OP. My understanding is that the term (as described here) refers to a certain way of using a NN that implements a world model: looking inside the NN and learning useful things about the world. It’s an idea about how to use transparency, not how to achieve transparency.