Great post! I am very curious about how people are interpreting Q10 and Q11, and what their models are. What are prototypical examples of ‘insights on a similar level to deep learning’?
Here’s a breakdown of examples of things that come to my mind:
Historical DL-level advances:
the development of RL (Q-learning algorithm, etc.)
the original formulation of a single neuron, i.e. affine transformation + non-linearity
Future possible DL-level advances:
a successor to back-prop (e.g. how biological neurons actually learn)
a successor to the Q-learning family (e.g. neatly generalizing and extending ‘intrinsic motivation’ hacks)
full brain simulation
an alternative to the affine+activation recipe
Below DL-level major advances:
an elegant solution for learning from cross-modal inputs in a self-supervised fashion (babies somehow do it)
a breakthrough in active learning
a generalizable solution to learning disentangled and compositional representations
a solution to adversarial examples
Grey areas:
breakthroughs in neural architecture search
a breakthrough in neural Turing machine-type research
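To make the ‘affine transformation + non-linearity’ recipe above concrete, here’s a minimal sketch of a single neuron (my own illustration with tanh as the activation, not something from the survey):

```python
import math

def neuron(inputs, weights, bias):
    # Affine transformation: weighted sum of the inputs plus a bias term.
    pre_activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Non-linearity: tanh is one common choice of activation function.
    return math.tanh(pre_activation)

# With zero weights and zero bias, the output is tanh(0) = 0.
print(neuron([1.0, 2.0], [0.0, 0.0], 0.0))  # → 0.0
```

Deep networks are, at bottom, compositions of this recipe, which is part of why an alternative to it would sit in the ‘DL-level’ tier for me.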
I’d also like to know how people’s thinking fits in with my taxonomy: Are people who leaned yes on Q11 basing their reasoning on the inadequacy of the ‘below DL-level advances’ list, or perhaps on the necessity of the ‘DL-level advances’ list? Or perhaps people interpreted those questions completely differently, and don’t agree with my dividing lines?
Thank you for asking this question and for giving that breakdown. I was wondering something similar. I am not an AI scientist, but DL seems like a very big deal to me, so I was surprised that so many people seemed to think we need more insights on that level. My charitable interpretation is that they don’t think DL is a big deal.