I also read On Intelligence and it had a large impact on my reading habits. I was not previously aware that Andrew Ng had a similar experience, which leads me to wonder how many people became interested in neuroscience as a result of that one book.
On a side note: the only significance of Andrew Ng’s stated belief that AGI is far off is as an indicator that he doesn’t see a route to get there in the near term. Relatedly, he made a kind of weird comment recently at the end of a conference talk, to the effect of “Worrying about the dangers of machine superintelligence today is like worrying about overpopulation on Mars.”
In one sense, the “one learning algorithm” hypothesis should not seem very surprising. In AI/machine learning, essentially all practical learning algorithms can be viewed as some approximation of general Bayesian inference (yes, this includes stochastic gradient descent). Given a utility function and a powerful inference system, defining a strong intelligent agent is straightforward (general reinforcement learning, AIXI, etc.).
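To make the SGD claim concrete, here's a minimal toy sketch of stochastic gradient Langevin dynamics (SGLD, Welling & Teh 2011), one precise sense in which SGD approximates Bayesian inference: delete the injected noise term below and you recover plain minibatch SGD on the log posterior; keep it and the iterates become (approximate) samples from the posterior. The toy model (inferring the unknown mean of a Gaussian under a N(0, 10) prior) and all constants are mine, chosen only for illustration.

```python
import math
import random

random.seed(0)

# Toy data: 1000 draws from N(2, 1); we infer the mean with a N(0, 10) prior.
N = 1000
data = [random.gauss(2.0, 1.0) for _ in range(N)]

theta = 0.0        # initial guess for the unknown mean
eps = 1e-4         # step size
batch = 32
samples = []

for t in range(5000):
    xs = random.sample(data, batch)
    # Gradient of log prior N(0, 10) at theta: -theta / 10
    grad_prior = -theta / 10.0
    # Minibatch estimate of the log-likelihood gradient, rescaled by N/batch
    grad_lik = (N / batch) * sum(x - theta for x in xs)
    # SGLD update: an SGD step on the log posterior, plus Gaussian noise
    # with variance equal to the step size. Drop the noise term -> plain SGD.
    theta += 0.5 * eps * (grad_prior + grad_lik) + random.gauss(0.0, math.sqrt(eps))
    if t > 1000:   # discard burn-in
        samples.append(theta)

posterior_mean = sum(samples) / len(samples)
```

With enough steps the running samples concentrate around the true posterior mean (close to 2 here), whereas the noiseless SGD version would just converge to the MAP point.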
The difficulty, of course, is in scaling up practical inference algorithms to compete with the brain. One of the older views in neuroscience was that the brain employed a huge number of specialized algorithms fine-tuned in deep time by evolution: specialized vision modules, audio modules, motor modules, language modules, and so on. The novelty of the one-learning-algorithm hypothesis is the realization that all of that specialization is not hardwired, but is instead the lifetime-accumulated result of a much simpler general learning algorithm.
On Intelligence is a well-written pop-sci book about a very important new development in neuroscience. However, Hawkins’s particular implementation of the general ideas—his HTM stuff—is neither groundbreaking, theoretically promising, nor very effective. There are dozens of unsupervised generative model frameworks that are more powerful in theory and in practice (as one example, look into any of Bengio’s recent work), and HTM itself has had little impact on machine learning.
I wonder also about Hassabis (founder of DeepMind), who studied computational neuroscience and then started a deep learning company: did he read On Intelligence? Regardless, you can see the flow of influence in how deep learning papers cite neuroscience.