I’m reluctant to frame engineering and philosophy as adversarial disciplines in this conversation, as AI and ML research has long drawn on both. As an example, Minsky’s “The Society of Mind” and Minsky and Papert’s “Perceptrons” are hands that wash each other and then reach forward to underpin much of what is now accepted in neural network research.
Moreover, there aren’t just two disciplines feeding this sport; lessons have been taken from computer science, philosophy, psychology and neuroscience over the fifty-odd years of AI work. The more successful ML shops have used the higher-order language of psychology to describe and intervene on operational aspects (the in-game behaviour of AlphaGo, for example) and neuroscience to create the models (Hassabis, 2009).
I will be surprised if biological models of neurotransmitters don’t make an appearance as an nth anchor in the next decade or so. These may well take inspiration from Patricia Churchland’s decades-long cross-disciplinary work in philosophy and neuroscience. They may also draw on the intersection of psychology and neuroscience that is informing mental health treatments, both chemical and experiential.
This is all without getting into those fjords of philosophy in which many spend their time prioritising happiness over truth: ethics and morality… which is what I think this blog post is really talking about when it says philosophy. Will connectionist modelling learn from and contribute to deontological, utilitarian, consequentialist and hedonistic ethics? I don’t see how it could not.
I think it was talking about how people approach/view ‘AI risk’.
I’m a newcomer here, but this issue has been bothering me for some time. You’re right that these aren’t necessarily adversarial disciplines for many thinkers. The tension can perhaps be explained culturally: first, the financial economics of technological development leaves little room for philosophy, and second, people with the intellectual predilection, plus the time and energy, for cross-disciplinary thought are probably not prevalent among those doing the everyday, boots-on-the-ground work in AI/ML.
Which is why I’m glad to have found the discussions here about AI/ML, though, alas, this comment will probably not be posted. It’s hard to be rational about culture, but it’s the water we’re swimming in.