I’d say that in this hypothetical there is a basis the network is thinking in; it just so happens not to match the human abstraction set for thinking about the problem in question.
Well, yes, but the number of basis elements needed to make that basis human-interpretable could theoretically be exponential in the number of neurons.
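(A small illustration of why an interpretable feature set can vastly outnumber the neurons, not something from the conversation itself: in d dimensions you can pack far more than d nearly-orthogonal directions, since random unit vectors in high dimensions have small pairwise overlaps. The dimensions and counts below are arbitrary choices for the sketch.)

```python
import numpy as np

# Hypothetical sketch: 1024 candidate "feature" directions in a 64-dimensional
# activation space -- 16x more features than neurons -- yet no two features
# overlap strongly, so each could still read out as a distinct concept.
rng = np.random.default_rng(0)
d, n_features = 64, 1024

features = rng.standard_normal((n_features, d))
features /= np.linalg.norm(features, axis=1, keepdims=True)  # unit vectors

# Largest absolute cosine similarity between any two distinct features.
gram = features @ features.T
np.fill_diagonal(gram, 0.0)
max_overlap = np.abs(gram).max()
print(f"{n_features} features in {d} dims, max pairwise overlap: {max_overlap:.2f}")
```

The point is only that "many more interpretable directions than neurons" is geometrically cheap; whether the network actually uses such an overcomplete set is the open question being debated.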
Sure, but that’s not the question I’m primarily interested in. I don’t want the most interpretable basis; I want the basis the network itself uses for thinking. My goal is to find the elementary unit of neural networks, and to build theorems, and eventually a whole predictive theory of neural network computation and selection, on top of it.
That this may possibly make current networks more human-interpretable even in the short run is just a neat side benefit to me.
Ah, I might have misunderstood your original point then, sorry!
I’m not sure what you mean by “basis” then. How strictly are you using this term?
I imagine you are basically going down the “features as elementary unit” route proposed in Circuits (although you might not be predisposed to assume features are the elementary unit). Finding the set of features used by the network and figuring out how it’s using them in its computations does not translate 1-to-1 to “find the basis the network is thinking in” in my mind.
Fair enough, imprecise use of language. For some definitions of “thinking” I’d guess a small vision CNN isn’t thinking anything.