Naturalizing normativity just means explaining normative phenomena in terms of other natural phenomena whose existence we accept as part of our broader metaphysics. E.g., explaining biological function in terms of evolution by natural selection, where natural selection is explained by differential survival rates and other statistical facts. Or explaining facts about minds, beliefs, attitudes, etc., in terms of non-homuncular goings-on in the brain. The project is typically aimed at humans, but it shows up as soon as you get to biology and the handful of normative concepts (life, function, health, fitness, etc.) that constitute its core subject matter.
Hope that helps.
I don’t think I’ve seen the term “normative phenomena” before. So basically normative concepts are concepts in everyday language (“life”, “health”), which get messy if you try to push them too hard? But what are normative phenomena then? We don’t see or touch “life” or “health”; we see something closer to the actual stuff going on in the world, and then we come up with everyday word-concepts for it that sort of work until they don’t.
It’s not really helping in that I still have no real intuition about what you’re going on about, and your AI critique seems to be aimed at something from 30 years ago instead of contemporary stuff like Omohundro’s Basic AI Drives paper (you describe AIs as being “without the desire to evade death, nourish itself, and protect a physical body”; the paper’s point is that AGIs operating in the physical world would have exactly that) or the whole deep learning explosion with massive datasets of the last few years (“we under-estimate by many orders of magnitude the volume of inputs needed to shape our “models.””; right now people are in a race to feed ginormous input sets to deep learning systems and probably aren’t stopping anytime soon).
Like, yeah. People can be really impressive, but unless you want to make an explicit case for the contrary, people here still think people are made of parts and there exists some way to go from a large cloud of hydrogen to people. If you think there’s some impossible gap between the human and the nonhuman worlds, then how do you think actual humans got here? Right now you seem to be giving the sort of smug shrug of someone who, on one hand, doesn’t want to ask that question themselves because it’s corrosive to dignified pre-Darwin liberal arts sensibilities, and who, on the other hand, tries to hint to people genuinely interested in the question that it’s a stupid question to ask and that they should have read better scholarship to convince themselves of that.
If you think there’s some impossible gap between the human and the nonhuman worlds, then how do you think actual humans got here?
There are many types of explanatory claims in our language. Some are causal (how did something come to be), others are constitutive (what is it to be something), and others still are normative (why is something good or right). Most mathematical explanation is constitutive, most action explanation is rational, and most material explanation is causal. It’s totally possible to think there’s a plain causal explanation about how humans evolved (through a combination of drift and natural selection, in which proportion we will likely never know), while still thinking that the prospects for coming up with a constitutive explanation of normativity are dim (at best) or outright confused (at worst).
A common project shape for reductive naturalists is to try to use causal explanations to build a constitutive explanation of the normative aspects of biological life. If you spend enough time studying the many historical attempts that have been made at these explanations, you begin to see a pattern emerge: a would-be reductive theorist will either smuggle in a normative concept to fill out their causal story (thereby begging the question), or fail to deliver a theory with the explanatory power to make basic normative distinctions which we intuitively recognize and which the theory should be able to account for (there are several really good tests for this; see the various takes on rule-following problems developed by Wittgenstein). Terms like “information,” “structure,” “fitness,” “processing,” “innateness,” and the like are all subject to this sort of dilemma if you really put them under scrutiny. Magic non-natural stuff (like souls or spirits or that kind of thing) is what people have often reached for when forced onto this dilemma. Postulating that kind of thing is just the other side of the coin, and makes exactly the same error.
So I guess I’d say: I find it totally plausible that normative phenomena could be sui generis in much the same way that mathematical phenomena are, without finding it problematic that natural creatures can come to understand those phenomena through their upbringing and education. Some people get wrapped up in bewilderment about how this could even be possible, and I think there’s good reason to believe that bewilderment reflects deep misunderstandings about the phenomena themselves, the remedy for which is sometimes called philosophical therapy.
Another point I want to be clear on:
right now people are in a race to feed ginormous input sets to deep learning systems and probably aren’t stopping anytime soon
I don’t think it’s in-principle impossible to get from non-intelligent physical stuff to intelligent physical stuff by doing this; in fact, I’m sympathetic to the biological anchors approach described here, which was recently discussed on this site. I just think that the training runs will need to pay the computational costs for evolution to arrive at human brains, and for human brains to develop to maturity. I tend to think (and good research in child development backs this up) that the structure of our thought is inextricably linked to our physicality. If anything, I think that would push the development point out past Karnofsky’s 2093 estimate. Again, it’s clearly not in-principle impossible for a natural thing to get the right amount of inputs to become intelligent (it clearly is possible; every human does it in going from baby to adult); I just think we often underestimate how deeply important our biological histories (evolutionary and ontogenetic) are to this process. So I hope my urgings don’t come across as advocating a return to some kind of pre-Darwinian darkness; if anything, I hope they can be seen as advocating an even more thoroughgoing biological understanding. That must start with taking very seriously the problems introduced by drift, and the problems with attempts to derive the normative aspects of life from a concept like genetic information (one which is notoriously subject to the dilemma above).
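For a rough sense of the scale I have in mind, here is a crude back-of-the-envelope sketch in Python. The figures (a brain doing something like 1e15 FLOP per second, and roughly 30 years to reach maturity) are my own illustrative assumptions, not numbers taken from the biological anchors report, and they only cover the development half of the cost, not the evolutionary half:

```python
# Rough, illustrative back-of-the-envelope: what it might cost to "re-run"
# one human brain's development to maturity. All figures are assumptions.

BRAIN_FLOP_PER_SECOND = 1e15                  # ballpark assumption for brain throughput
SECONDS_TO_MATURITY = 30 * 365 * 24 * 3600    # roughly 30 years of development

lifetime_compute = BRAIN_FLOP_PER_SECOND * SECONDS_TO_MATURITY
print(f"Development-to-maturity compute: ~{lifetime_compute:.1e} FLOP")
# => ~9.5e+23 FLOP, i.e. on the order of 1e24 FLOP.
# The evolutionary search that produced brains in the first place would add
# many more orders of magnitude on top of this, which is the point above.
```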
Thanks for the tip on the Basic AI Drives paper. I’ll give it a read. My suspicion is that once the “basic drives” are specified comprehensively enough to yield an intelligible picture of the agent in question, we’ll find that they’re so much like us that the alignment problem disappears; they can only be aligned. That’s what someone argues in one of the papers I linked above. A separate question I’ve wondered about (please point me to any good discussion of it) is how our thinking about AI alignment compares with how we would think about aligning intelligent aliens.
Finally, to answer this:
So basically normative concepts are concepts in everyday language (“life”, “health”), which get messy if you try to push them too hard?
No. Normative concepts aren’t just everyday concepts that get messy when pushed; they’re a narrower class (though many of them are messy too). Normative concepts are those which structure our evaluative thought and talk (about the good, the bad, the ugly, etc.).
Anyway, good stuff. Keep the questions coming, happy to answer.
It’s totally possible to think there’s a plain causal explanation about how humans evolved (through a combination of drift and natural selection, in which proportion we will likely never know), while still thinking that the prospects for coming up with a constitutive explanation of normativity are dim (at best) or outright confused (at worst).
If we believe there is a plain causal explanation, that rules out some explanations we could otherwise imagine. Humans can’t have been created by a supernatural agency (as was widely thought in Antiquity, the Middle Ages, and the Renaissance, when most of the canon of philosophy was developed), and basic human functioning probably shouldn’t involve processes wildly contrary to known physics (something still believed by some smart people, like Roger Penrose).
The other aspect is computational complexity. If we assume the causal explanation, we also get quantifiable limits on how much evolutionary work and complexity can have gone into humans. People are generally aware that there’s a lot of it, and a lot less aware that it’s quantifiably finite. The size of the human genome, which we can measure, puts one hard limit on how complex a human being can be. The limited amount of sensory information a human can pick up while growing to adulthood, and the limited amount of computation the human brain can do during that time, put another. Evolutionary theory also gives us a very interesting extra hint: everything you see in nature should be reachable by a very gradual ascent of slightly different forms, all of which need to be viable and competitive, all the way from the simplest chemical replicators. So that’s another constraint in the bin: whatever is going on with humans probably doesn’t have to drop out of nowhere as a ball of intractable complexity, but can be reached by a series of small-enough-to-be-understandable improvements to a small-enough-to-be-understandable initial lifeform.
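To put a toy number on “quantifiably finite”, here is a minimal sketch of the genome-size bound. The ~3.1 billion base pair figure is a rough approximation, and two bits per base is a deliberately generous upper bound that ignores compressibility and everything the environment contributes during development:

```python
# Crude upper bound on the information the human genome can carry.
BASE_PAIRS = 3.1e9      # approximate length of the human genome
BITS_PER_BASE = 2       # four possible bases -> at most 2 bits each

total_bits = BASE_PAIRS * BITS_PER_BASE
total_megabytes = total_bits / 8 / 1e6
print(f"~{total_bits:.1e} bits, i.e. roughly {total_megabytes:.0f} MB")
# => ~6.2e+09 bits, roughly 775 MB: a hard, measurable ceiling on how much
#    "design information" evolution could have written into the genome itself.
```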
The entire sphere of complex but finite computational processes has been a blind spot for philosophy. Nobody really understood it until computers had become reasonably common. (Dennett talks about this in Darwin’s Dangerous Idea when discussing Conway’s Game of Life.) Actually figuring things out from opaque blobs of computation like human DNA is another problem, of course. If you want to have some fun, you can reach for Rice’s theorem (which basically follows from Turing’s halting problem), which shows that no general procedure can decide any non-trivial semantic property of a program from its code alone. Groups that infer such properties in practice, like hackers and biologists, will nod along and then go back to poking the opaque mystery blobs with various clever implements and taking copious notes on what they do when poked, even though full logical closure is not available.
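As a toy illustration of the poke-and-take-notes strategy (and only of that; Rice’s theorem rules out a general static decision procedure, not empirical probing), here is a minimal sketch in which an opaque function is characterized purely by recording what it does on sampled inputs. The black_box function is a made-up stand-in for any blob we can run but not read:

```python
# Toy sketch: characterize an opaque "blob" purely by observing its behavior.

def black_box(x: int) -> int:
    # Pretend this source is unavailable: a binary, a genome, a trained net.
    return (x * x) % 7

def probe(fn, inputs):
    """Run the blob on chosen inputs and take notes on what it did."""
    return {x: fn(x) for x in inputs}

observations = probe(black_box, range(15))
print(observations)
# From the notes we can form fallible hypotheses about the blob's behavior
# (e.g. "outputs always fall in 0..6") without any general logical guarantee.
```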
So coming back to the problem,
If you spend enough time studying the many historical attempts that have been made at these explanations, you begin to see a pattern emerge: a would-be reductive theorist will either smuggle in a normative concept to fill out their causal story (thereby begging the question), or fail to deliver a theory with the explanatory power to make basic normative distinctions which we intuitively recognize and which the theory should be able to account for (there are several really good tests for this; see the various takes on rule-following problems developed by Wittgenstein). Terms like “information,” “structure,” “fitness,” “processing,” “innateness,” and the like are all subject to this sort of dilemma if you really put them under scrutiny.
Okay, two thoughts about this. First, yes. This sounds pretty much like the inadequacy-of-mainstream-philosophy argument that was being made on Lesswrong back in the Sequences days. The lack of satisfactory descriptions of human-level concepts that actually bottom out in reductive gears is real, but the inability to come up with those descriptions might be pretty much equivalent to the inability to write an understandable human-level AI architecture. That might be impossible, or it might be doable, but it doesn’t seem like we’ll find out by watching philosophers keep doing things with present-day philosopher toolkits. The people poking at the stuff are neuroscientists and computer scientists, and there’s an aspect of looking at a “mechanized” mind from the outside in that work (see for instance the predictive coding stuff on the neuroscience side) that seems very foreign to how philosophy operates.
Second thing is, I read this and I’m asking “so, what’s the actual problem we’re trying to solve?” You seem to be talking from a position of general methodological unhappiness with philosophy, where the problem is something like “you want to do philosophy as it’s always been done and you want it to get traction at the cutting edge of intellectual problems of the present day”. Concrete problems might be “understand how humans came to be and how they are able to do all the complex human thinking stuff”, which is a lot of neuroscience plus some evolutionary biology, or “build a human-level artificial intelligence that will act in human interests no matter how powerful it is”, where, well, the second part is looking pretty difficult so the ideal answer might be “don’t”, but the first part seems to be coming along with a whole lot of computer science and hasn’t needed a lot of input from philosophy so far. “Help people understand their place in the world and themselves and find life satisfaction” is a different goal again, and something a lot of philosophy used to be about. Taking the high-level human concepts that we don’t have satisfactory reductions for yet as granted could work fine at this level. But there seems to be a sense of philosophers becoming glorified talk therapists here, which doesn’t really feel like a satisfactory answer either.
Yeah, I agree with a lot of this. Especially:
If you want to have some fun, you can reach for Rice’s theorem (which basically follows from Turing’s halting problem), which shows that no general procedure can decide any non-trivial semantic property of a program from its code alone. Groups that infer such properties in practice, like hackers and biologists, will nod along and then go back to poking the opaque mystery blobs with various clever implements and taking copious notes on what they do when poked, even though full logical closure is not available.
I take it that this is how most progress in artificial intelligence, neuroscience, and cogsci has proceeded (and will continue to proceed). My caution—and whole point in wading in here—is just that we shouldn’t expect progress by trying to come up with a better theory of mind or agency, even with more sophisticated explanatory tools.
I think it’s totally coherent, and even likely, that future artificial agents (generally intelligent or not) will be created without a general theory of mind or action.
In this scenario, you get a complete causal understanding of the mechanisms that enable agents to become minded and intentionally active, but you still don’t know what that agency or intelligence consists in beyond our simple, non-reductive folk-psychological explanations. A lot of folks in this scenario would be inclined to say, “who cares, we got the gears-level understanding,” and I guess the only people who would care would be those who wanted to use the reductive causal story to tell us what it means to be minded. The philosophers I admire (John McDowell is the best example) appreciate the difference between causal and constitutive explanations when it comes to facts about minds and agents, and urge that progress in the sciences is hindered by running these together. They see no obstacle to technical progress in neuroscientific understanding or artificial intelligence; they just see themselves as sorting out what these disciplines are and are not about. They don’t think they’re in the business of giving constitutive explanations of what minds and agents are; rather, they’re in the business of discovering what enables minds and agents to do their minded and agential work. I think this distinction is apparent even with basic biological concepts like life. Biology can give us a complete account of the gears that enable life to work as it does without shedding any light on what makes it the case that something is alive, functioning, fit, goal-directed, successful, etc. But that’s not a problem at all if you think the purpose of biology is just to enable better medicine and engineering (like making artificial life forms or agents). As for a task like “given a region of physical space, identify whether there’s an agent there,” I don’t think we should expect any theory, philosophical or otherwise, to yield a solution. I’m sure we can build artificial systems that can do it reliably (we probably already have some), but that won’t come by way of understanding what makes an agent an agent.
Insofar as one hopes to advance certain engineering projects by “sorting out fundamental confusions about agency,” I just wanted to offer (1) that there’s a rich literature in contemporary philosophy, continuous with the sciences, about different approaches to doing just that; and (2) that there are interesting arguments in this literature which aim to demonstrate that any causal-historical theory of these things will face an apparently intractable dilemma: either beg the question or be unable to make the distinctions needed to explain what agency and mentality consist in.
To summarize the points I’ve been trying to make (meanderingly, I’ll admit): On the one hand, I applaud the author for prioritizing that confusion-resolution; on the other hand, I’d urge them not to fall into the trap of thinking that confusion-resolution must take the form of stating an alternative theory of action or mind. The best kind of confusion-resolution is the kind that Wittgenstein introduced into philosophy, the kind where the problems themselves disappear—not because we realize they’re impossible to solve with present tools and so we give up, but because we realize we weren’t even clear about what we were asking in the first place (so the problems fail to even arise). In this case, the problem that’s supposed to disappear is the felt need to give a reductive causal account of minds and agents in terms of the non-normative explanatory tools available from maths and physics. So, go ahead and sort out those confusions, but be warned about what that project involves, who has gone down the road before, and the structural obstacles they’ve encountered both in and outside of philosophy so that you can be clear-headed about what the inquiry can reasonably be expected to yield.
That’s all I’ll say on the matter. Great back and forth; I don’t think there’s really much distance between us here. And for what it’s worth, mine is a pretty niche view in philosophy, because taken to its conclusion it means that the whole pursuit of trying to explain what minds and agents are is just confused from the gun—not limited by the particular set of explanatory tools presently available, just conceptually confused. Once that’s understood, one stops practicing or funding that sort of work. It is totally possible and advisable to keep studying the enabling gears so we can do better medicine and engineering, but we should get clear on how that medical or engineering understanding will advance and what those advances mean for those fundamental questions about what makes life, agents, and minds what they are. Good philosophy helps to dislodge us from the grip of expecting anything non-circular and illuminating in answer to those questions.