Any analogy between the ways the human brain operates and the ways machine learning algorithms operate is very loose, and I think it is important to maintain that distinction rather than assume we learn much about one by studying the other. Yes, there are issues with the lack of interpretability in neural network models, but the System 1 / System 2 dichotomy doesn’t shed any useful light on them.
The motivation for the post was Kahneman himself using System 1 / System 2 as a comparison when talking about neural networks versus symbolic AI, and the clear connection between Stiegler’s philosophy and that dichotomy.
Of course, the human brain and deep neural networks are not the same, but DeepMind, for example, advocates using one to learn about the other:
“We believe that drawing inspiration from neuroscience in AI research is important for two reasons. First, neuroscience can help validate AI techniques that already exist. Put simply, if we discover one of our artificial algorithms mimics a function within the brain, it suggests our approach may be on the right track. Second, neuroscience can provide a rich source of inspiration for new types of algorithms and architectures to employ when building artificial brains. Traditional approaches to AI have historically been dominated by logic-based methods and theoretical mathematical models. We argue that neuroscience can complement these by identifying classes of biological computation that may be critical to cognitive function.”
A relevant example given in the article is the study of the firing properties of dopamine neurons in the mammalian basal ganglia, where insights from reinforcement learning (specifically, the reward-prediction-error signal of temporal-difference learning) are applied to neurophysiological research.
I think the connection drawn in the post is valuable because it points to considering Stiegler’s work in the symbolic / connectionist AI context, which could be fruitful for the philosophical problems we encounter when designing fair or trustworthy AI.