My motivation for the post was Kahneman himself using the System 1 / System 2 distinction as an analogy when talking about neural-network vs. symbolic AI, and the clear connection between Stiegler’s philosophy and that dichotomy.
Of course, the human brain and deep neural networks are not the same, but DeepMind, for example, advocates using one to learn about the other:
“We believe that drawing inspiration from neuroscience in AI research is important for two reasons. First, neuroscience can help validate AI techniques that already exist. Put simply, if we discover one of our artificial algorithms mimics a function within the brain, it suggests our approach may be on the right track. Second, neuroscience can provide a rich source of inspiration for new types of algorithms and architectures to employ when building artificial brains. Traditional approaches to AI have historically been dominated by logic-based methods and theoretical mathematical models. We argue that neuroscience can complement these by identifying classes of biological computation that may be critical to cognitive function.”
A relevant example given in the article is the study of the firing properties of dopamine neurons in the mammalian basal ganglia, where insights from reinforcement learning are applied to neurophysiological research.
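The reinforcement-learning insight behind that dopamine work is the reward-prediction-error hypothesis: dopamine firing is thought to track the temporal-difference error delta = r + gamma * V(s') - V(s). Here is a toy TD(0) sketch of that idea; the state names, learning rate, and episode structure are my own illustration, not the article's model:

```python
# Toy TD(0) illustration of the reward-prediction-error idea:
# the TD error (delta) is the quantity dopamine neurons are
# hypothesized to report. All names and parameters are illustrative.

gamma = 1.0   # undiscounted, single-step episode
alpha = 0.2   # learning rate

V = {"cue": 0.0}      # learned value estimate for the cue state
TERMINAL_VALUE = 0.0  # value after the episode ends

def run_episode(V):
    """One trial: a cue is shown, then a reward r = 1 is delivered."""
    r = 1.0
    delta = r + gamma * TERMINAL_VALUE - V["cue"]  # prediction error
    V["cue"] += alpha * delta                      # TD(0) update
    return delta

errors = [run_episode(V) for _ in range(30)]

print(round(errors[0], 3))   # 1.0 -- reward is unpredicted at first
print(round(errors[-1], 3))  # near 0 -- reward is now predicted
```

The shrinking error across trials mirrors the neurophysiological finding that dopamine responses to a reward diminish once the reward becomes predicted by a cue.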
I think the connection drawn in the post is valuable: it points to considering Stiegler’s work in the symbolic/connectionist AI context, which could inform the philosophical problems we encounter when designing fair or trustworthy AI.