It’s connecting this sort of “good models get themselves expressed” layer of abstraction to neurons that’s the hard part :) I think future breakthroughs in training RNNs will be a big aid to imagination.
Right now when I pattern-match what you say onto ANN architectures, I can imagine something like making an RNN from a scale-free network and trying to tune less-connected nodes around different weightings of more-connected nodes (rough sketch below). But I expect that in the future, I’ll have much better building blocks for imagining.
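To make that concrete, here's a minimal sketch of what I mean (my own illustration, not anyone's actual proposal): build the recurrent connectivity from a scale-free graph, then only let the weights touching low-degree ("less-connected") nodes update during training, while connections among high-degree hubs stay fixed. The degree cutoff and all the names here are hypothetical choices for the example.

```python
import networkx as nx
import torch
import torch.nn as nn

N_HIDDEN = 128
DEGREE_CUTOFF = 8  # hypothetical: nodes with degree <= this count as "less-connected"

# Scale-free connectivity via preferential attachment.
graph = nx.barabasi_albert_graph(N_HIDDEN, m=4, seed=0)
adjacency = torch.tensor(nx.to_numpy_array(graph), dtype=torch.float32)

# Mask that is nonzero only on edges touching a low-degree node:
# those recurrent weights get gradient updates; hub-to-hub weights are frozen.
degrees = torch.tensor([d for _, d in graph.degree()], dtype=torch.float32)
low_degree = (degrees <= DEGREE_CUTOFF).float()
trainable_mask = adjacency * torch.maximum(low_degree[:, None], low_degree[None, :])

class ScaleFreeRNN(nn.Module):
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_hidden)
        # Recurrent weights only where the scale-free graph has an edge.
        self.w_rec = nn.Parameter(0.1 * torch.randn(n_hidden, n_hidden) * adjacency)
        self.w_out = nn.Linear(n_hidden, n_out)
        # Zero out gradients on frozen (hub-to-hub or absent) connections.
        self.w_rec.register_hook(lambda g: g * trainable_mask)

    def forward(self, xs):  # xs: (time, batch, n_in)
        h = torch.zeros(xs.shape[1], self.w_rec.shape[0])
        for x in xs:
            h = torch.tanh(self.w_in(x) + h @ self.w_rec.T)
        return self.w_out(h)

model = ScaleFreeRNN(n_in=10, n_hidden=N_HIDDEN, n_out=2)
```

Obviously this is just "tune the periphery around fixed hubs" in the crudest possible form; the interesting part (which I can't write down yet) is how anything like "good models get themselves expressed" would fall out of that.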
In case it helps, my main aids-to-imagination right now are the sequence memory / CHMM story (see my comment here), Dileep George’s PGM-based vision model along with his related follow-up papers like this, plus miscellaneous random other stuff.