Yeah, I agree with a lot of this. Especially:
If you want to have some fun, you can reach for Rice’s theorem (basically following from Turing’s halting problem) which shows that you can’t logically infer any semantic properties whatsoever from the code of an undocumented computer program. Various existing property inferrer groups like hackers and biologists will nod along and then go back to poking the opaque mystery blobs with various clever implements and taking copious notes of what they do when poked, even though full logical closure is not available.
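(To make the blob-poking concrete, here’s a toy sketch of my own in Python, with a made-up “mystery” function standing in for the undocumented program. Rice’s theorem blocks logically deducing non-trivial semantic properties from code in general; it doesn’t block empirical probing, which yields defeasible conjectures rather than proofs.)

```python
# A toy "opaque mystery blob" (my made-up example; pretend we can't read its source).
def mystery(x: int) -> int:
    return (x * x) % 7

# Rice's theorem rules out a general procedure for logically inferring
# non-trivial semantic properties from code alone. It doesn't rule out
# poking: feed in inputs, take copious notes on the outputs.
notes = {x: mystery(x) for x in range(20)}

# A falsifiable conjecture the notes support (defeasible, not logical closure):
assert all(0 <= output < 7 for output in notes.values())
print("conjecture survived 20 pokes: mystery(x) always lands in range(7)")
```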
I take it that this is how most progress in artificial intelligence, neuroscience, and cogsci has proceeded (and will continue to proceed). My caution, and my whole point in wading in here, is just that we shouldn’t expect progress from trying to come up with a better theory of mind or agency, even with more sophisticated explanatory tools.
I think it’s totally coherent, and even likely, that future artificial agents (generally intelligent or not) will be created without a general theory of mind or action.
In this scenario, you get a complete causal understanding of the mechanisms that enable agents to become minded and intentionally active, but you still don’t know what that agency or intelligence consists in beyond our simple, non-reductive folk-psychological explanations. A lot of folks in this scenario would be inclined to say, “who cares, we got the gears-level understanding,” and I guess the only people who would care would be those who wanted to use the reductive causal story to tell us what it means to be minded.

The philosophers I admire (John McDowell is the best example) appreciate the difference between causal and constitutive explanations when it comes to facts about minds and agents, and urge that progress in the sciences is hindered by running these together. They see no obstacle to technical progress in neuroscientific understanding or artificial intelligence; they just see themselves as sorting out what these disciplines are and are not about. They don’t think they’re in the business of giving constitutive explanations of what minds and agents are; rather, they’re in the business of discovering what enables minds and agents to do their minded and agential work.

I think this distinction is apparent even with basic biological concepts like life. Biology can give us a complete account of the gears that enable life to work as it does without shedding any light on what makes it the case that something is alive, functioning, fit, goal-directed, successful, etc. But that’s not a problem at all if you think the purpose of biology is just to enable better medicine and engineering (like making artificial life forms or agents). Take a task like, “given a region of physical space, identify whether there’s an agent there.” I don’t think we should expect any theory, philosophical or otherwise, to yield solutions to that problem. I’m sure we can build artificial systems that can do it reliably (we probably already have some), but that won’t come by way of understanding what makes an agent an agent.
Insofar as one hopes to advance certain engineering projects by “sorting out fundamental confusions about agency,” I just wanted to offer that (1) there’s a rich literature in contemporary philosophy, continuous with the sciences, about different approaches to doing just that; and (2) there are interesting arguments in this literature which aim to demonstrate that any causal-historical theory of these things will face an apparently intractable dilemma: either beg the question or be unable to make the distinctions needed to explain what agency and mentality consist in.
To summarize the points I’ve been trying to make (meanderingly, I’ll admit): On the one hand, I applaud the author for prioritizing that confusion-resolution; on the other hand, I’d urge them not to fall into the trap of thinking that confusion-resolution must take the form of stating an alternative theory of action or mind. The best kind of confusion-resolution is the kind that Wittgenstein introduced into philosophy, the kind where the problems themselves disappear—not because we realize they’re impossible to solve with present tools and so we give up, but because we realize we weren’t even clear about what we were asking in the first place (so the problems fail to even arise). In this case, the problem that’s supposed to disappear is the felt need to give a reductive causal account of minds and agents in terms of the non-normative explanatory tools available from maths and physics. So, go ahead and sort out those confusions, but be warned about what that project involves, who has gone down the road before, and the structural obstacles they’ve encountered both in and outside of philosophy so that you can be clear-headed about what the inquiry can reasonably be expected to yield.
That’s all I’ll say on the matter. Great back and forth; I don’t think there’s really much distance between us here. And for what it’s worth, mine is a pretty niche view in philosophy, because, taken to its conclusion, it means that the whole pursuit of trying to explain what minds and agents are is just confused from the gun: not limited by the particular set of explanatory tools presently available, just conceptually confused. Once that’s understood, one stops practicing or funding that sort of work. It is totally possible and advisable to keep studying the enabling gears so we can do better medicine and engineering, but we should get clear on how that medical or engineering understanding will advance and what those advances mean for the fundamental questions about what makes life, agents, and minds what they are. Good philosophy helps to dislodge us from the grip of expecting anything non-circular and illuminating in answer to those questions.
This is great work. Glad that folks here take these Ryle-influenced ideas seriously and understand what it means for a putative problem about mind or agency to dissolve. Bravo.
To take the next (and, I think, final) step towards dissolution, I would recommend reading and reacting to a 1994 paper by John McDowell called “The Content of Perceptual Experience,” which is critical of Dennett’s view and even more Rylean and Wittgensteinian in its spirit (Gilbert Ryle was one of Dennett’s teachers).
I think it’s the closest you’ll get to de-mystification and “de-confusion” of psychological and agential concepts. Understanding the difference between personal and subpersonal states, explanations, etc., as well as the difference between causal and constitutive explanations, is essential to avoiding confusion when talking about what agency is and what enables agents to be what they are. After enough time reading McDowell, pretty much all of these questions about the nature of agency, mind, etc. lose their grip, and you can get on with doing subpersonal causal investigation of the mechanisms which (contingently) enable psychology and agency (here on earth, in humans and similar physical systems).
For what it’s worth, one thing that McDowell does not address (and doesn’t need to for his criticism to work), but which is nonetheless essential to Dennett’s theory, is the idea that facts about design in organisms can reduce to facts about natural selection. To understand why this can’t be done so easily, check out the argument from drift. The sheer possibility of evolution by drift (non-selective forces) confounds any purely statistical reduction of fitness facts to frequency facts. Despite the appearance of consensus, it’s not at all obvious that the core concepts that define biology have been explained in terms of (reduced to) facts about maths, physics, and chemistry.
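To see the force of the drift point, here’s a minimal simulation sketch (my illustration, a bare-bones Wright-Fisher model with made-up parameters, not anything from the literature linked below): two alleles with identical fitness, so zero selection, and yet the allele frequency wanders and one allele eventually goes to fixation.

```python
import random

# Bare-bones Wright-Fisher drift sketch (illustrative parameters are made up):
# two alleles, A and a, with *identical* fitness, i.e. no selection at all.
def wright_fisher(pop_size=100, p=0.5, generations=500, seed=1):
    rng = random.Random(seed)
    history = [p]
    for _ in range(generations):
        # Each offspring samples its allele from the current frequency p.
        # Pure sampling noise; neither allele is fitter than the other.
        count_a = sum(rng.random() < p for _ in range(pop_size))
        p = count_a / pop_size
        history.append(p)
        if p in (0.0, 1.0):  # drift alone can fix or eliminate an allele
            break
    return history

trajectory = wright_fisher()
print(f"frequency of A after {len(trajectory) - 1} generations: {trajectory[-1]}")
```

If a frequency trajectory like this can arise with no fitness differences whatsoever, then fitness facts can’t simply be read off frequency facts, which is the trouble for the purely statistical reduction.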
Here’s a link to Roberta Millstein’s SEP entry on drift (she believes drift can be theoretically and empirically distinguished from selection, so it’s also worth reading some folks who think it can’t be).
https://plato.stanford.edu/entries/genetic-drift/
Here’s the JSTOR link to the McDowell paper:
https://www.jstor.org/stable/2219740
Here are some summary papers on the McDowell-Dennett debate:
https://philarchive.org/archive/DRATPD-2v1
https://mlagflup.files.wordpress.com/2009/08/sofia-miguens-c-mlag-31.pdf