Denotational equivalence is actually undecidable for any class of languages more expressive than those recognized by deterministic pushdown automata.
This doesn’t mean that we can’t obtain conclusive evidence that two languages/automata are equal/equivalent via some mechanism other than a decision algorithm, of course. Nor does it mean that we can’t assign a probability of equality in an entirely sensible way. In fact, in probabilistic programming, probabilistic extensional equality of random variables is trivial to model; the catch is that there you’re talking about zero-free-parameter thunks rather than arbitrarily parameterized lambdas.
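To make the thunk case concrete, here’s a minimal sketch (a toy of my own, assuming NumPy and SciPy are available, not anything from a particular probabilistic-programming system): two closed samplers that happen to compute the same distribution by different routes, compared by a two-sample test on their outputs.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Two zero-free-parameter "thunks": closed samplers that take no arguments.
# Both are intended to denote the standard normal distribution, computed
# in different ways (toy examples of my own).
def thunk_a():
    return rng.normal(0.0, 1.0)

def thunk_b():
    # Box-Muller transform of two uniforms; 1 - u avoids log(0).
    u, v = 1.0 - rng.uniform(), rng.uniform()
    return np.sqrt(-2.0 * np.log(u)) * np.cos(2.0 * np.pi * v)

a = np.array([thunk_a() for _ in range(10_000)])
b = np.array([thunk_b() for _ in range(10_000)])

# Two-sample Kolmogorov-Smirnov test: a large p-value is (defeasible)
# evidence that the two thunks are extensionally equal as distributions;
# it is never a proof.
res = ks_2samp(a, b)
print(f"KS statistic = {res.statistic:.4f}, p-value = {res.pvalue:.3f}")

# For a parameterized lambda, e.g. lambda theta: rng.normal(theta, 1.0),
# the same check would have to hold for every value of theta, which is
# exactly where the decidability trouble comes back in.
```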
So we can’t really decide the denotational equivalence of lambda expressions (or recurrent neural networks), but I think that decision algorithms aren’t really useful, from a cognitive perspective, for more than obtaining a 100% likelihood of equivalence. That’s powerful, when you can do it, but you should also be able to get non-100% likelihoods in other ways.
The various forms of probabilistic static analysis can probably handle that problem.
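Even something much cruder than static analysis yields a non-100% likelihood: plain randomized differential testing. A minimal sketch (the two recognizers are toy examples of my own, not output from any analysis tool):

```python
import random

# Two recognizers for what I claim is the same language: strings over
# {a, b} containing an even number of 'a's. One is an explicit DFA walk,
# the other a one-liner.
def dfa_even_as(s: str) -> bool:
    state = 0
    for ch in s:
        if ch == "a":
            state ^= 1
    return state == 0

def count_even_as(s: str) -> bool:
    return s.count("a") % 2 == 0

def random_string(rng: random.Random, max_len: int = 20) -> str:
    return "".join(rng.choice("ab") for _ in range(rng.randint(0, max_len)))

rng = random.Random(0)
trials = 100_000
disagreements = sum(
    dfa_even_as(s) != count_even_as(s)
    for s in (random_string(rng) for _ in range(trials))
)

# Zero disagreements is not a proof of equivalence, but it does bound the
# plausible disagreement rate under the sampled input distribution: if the
# true rate were eps, seeing no counterexample in N trials has probability
# (1 - eps)**N <= exp(-eps * N).
print(f"disagreements: {disagreements} / {trials}")
```

The likelihood you get out is always relative to the input distribution you sampled from, which is the usual catch with this kind of evidence.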
So, you’re thinking that human abstraction ability derives from probable morphisms rather than certain morphisms over weaker classes? That makes a lot of sense.
On the other hand, from what I’ve seen in CS classes, humans do not seem very good at recognizing equivalences even between pushdown automata beyond a few simple cases. A human equipped with pencil and lots of paper can do a good job, but that’s an awful lot more powerful than just a human.
A professional scientist falls more under “human equipped with pencil and lots of paper”. Nobody said second-level learning was easy.
Maybe? I think what happens first is that we identify a probable morphism, and then we adjust the two theories at hand to eliminate the uncertainty from the morphism by pushing it “upwards” (into the imprecision of the higher-level model) and “downwards” (into the parameter dimensionality of the lower-level model).
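To put a toy number on that picture (this is entirely my own construction, assuming NumPy, not a model of anything specific): take a slightly nonlinear low-level law and its linear high-level summary, and absorb the mismatch in both directions.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 200)

# Low-level theory: a detailed, slightly nonlinear law.
def low_level(x):
    return 1.0 * x + 0.15 * x**2

# High-level theory: a linear summary fitted to the low-level predictions.
slope, intercept = np.polyfit(xs, low_level(xs), deg=1)

def high_level(x):
    return slope * x + intercept

# The "probable morphism" is the claim that the linear summary reproduces
# the low-level predictions; the residual measures how far off that is.
residual = low_level(xs) - high_level(xs)

# Pushing "upwards": the high-level theory now only claims its predictions
# up to this imprecision, within which the morphism does hold.
tolerance = float(np.max(np.abs(residual)))

# Pushing "downwards": give the low-level theory extra parameter
# dimensions (a fitted correction term), after which mapping it onto the
# linear summary is exact by construction.
correction_params = np.polyfit(xs, -residual, deg=2)

print(f"high-level imprecision: +/- {tolerance:.4f}")
print(f"extra low-level parameters: {correction_params.size}")
```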