It’s totally possible to think there’s a plain causal explanation of how humans evolved (through some combination of drift and natural selection, in proportions we will likely never know), while still thinking that the prospects for coming up with a constitutive explanation of normativity are dim (at best) or outright confused (at worst).
If we believe there is a plain causal explanation, that rules out some explanations we could otherwise imagine. Humans can’t have been created by a supernatural agency (as was widely thought in Antiquity, the Middle Ages and the Renaissance, when most of the canon of philosophy was developed), and basic human functioning probably doesn’t involve processes wildly contrary to known physics (as some smart people, like Roger Penrose, still believe it does).
The other aspect is computational complexity. If we assume the causal explanation, we also get quantifiable limits on how much evolutionary work and complexity can have gone into humans. People are generally aware that there’s a lot of it, and much less aware that it’s quantifiably finite. The size of the human genome, which we can measure, puts one hard limit on how complex a human being can be. The limited amount of sensory information a human can take in while growing to adulthood, and the limited amount of computation the human brain can do during that time, put another. Evolutionary theory also gives us a very interesting extra hint: everything you see in nature should be reachable by a very gradual ascent of slightly different forms, all of which need to be viable and competitive, all the way from the simplest chemical replicators. So that’s another limit in the bin: whatever is going on with humans is probably not something that has to drop out of nowhere as a ball of intractable complexity, but something that can be reached by a series of small-enough-to-be-understandable improvements to a small-enough-to-be-understandable initial lifeform.
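For a sense of scale, here’s a back-of-envelope sketch of the genome bound. The base-pair count is a rough public estimate, not a figure from this discussion, and the result is an upper bound on raw information content, not a claim about how much of it is meaningful.

```python
# Rough upper bound on the raw information content of the human genome.
# ~3.1e9 base pairs is an approximate public figure; each base is one of
# four letters, so it carries at most log2(4) = 2 bits before compression.
base_pairs = 3.1e9
bits_per_base = 2
total_bits = base_pairs * bits_per_base
total_megabytes = total_bits / 8 / 1e6
print(f"~{total_megabytes:.0f} MB upper bound")  # under a gigabyte
```

Whatever the details, the point stands that the number is finite and measured in ordinary engineering units, not in unbounded mystery.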
The entire sphere of complex but finite computational processes has been a blind spot for philosophy. Nobody really understood it until computers had become reasonably common. (Dennett talks about this in Darwin’s Dangerous Idea when discussing Conway’s Game of Life.) Actually figuring things out from opaque blobs of computation like human DNA is another problem, of course. If you want to have some fun, you can reach for Rice’s theorem (basically following from Turing’s halting problem), which shows that you can’t decide any non-trivial semantic property of an undocumented computer program from its code alone. Various existing property-inferrer groups like hackers and biologists will nod along and then go back to poking the opaque mystery blobs with various clever implements and taking copious notes of what they do when poked, even though full logical closure is not available.
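The poking-the-blob workflow can be sketched in a few lines; this illustrates the empirical probing, not Rice’s theorem itself (which is about what can’t be decided in general). `mystery` here is a made-up stand-in for an undocumented program.

```python
# Treat an undocumented program as a black box: run it on inputs, take
# notes, then form conjectures about its behavior. `mystery` stands in for
# the opaque blob; pretend we can't read its source.
def mystery(x):
    return x * x - 1

notes = {x: mystery(x) for x in range(-3, 4)}

# A conjecture drawn from the notes (evidence, not proof): the outputs
# look symmetric around zero. Rice's theorem rules out a general procedure
# for certifying properties like this from arbitrary code, but empirical
# poking still yields useful, defeasible knowledge.
looks_symmetric = all(notes[x] == notes[-x] for x in range(4))
```

The conjecture could always be overturned by the next poke, which is exactly the epistemic position hackers and biologists are comfortable working in.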
So coming back to the problem,
If you spend enough time studying the many historical attempts that have been made at these explanations, you begin to see a pattern emerge where a would-be reductive theorist will either smuggle in a normative concept to fill out their causal story (thereby begging the question), or fail to deliver a theory with the explanatory power to make basic normative distinctions which we intuitively recognize and which the theory should be able to account for (there are several really good tests out there for this—see the various takes on rule-following problems developed by Wittgenstein). Terms like “information”, “structure”, “fitness”, “processing”, “innateness” and the like are all subject to this sort of dilemma if you really put them under scrutiny.
Okay, two thoughts about this. First, yes. This sounds like pretty much the inadequacy-of-mainstream-philosophy argument that was being made on Lesswrong back in the Sequences days. The lack of satisfactory descriptions of human-level concepts that actually bottom out in reductive gears is real, but the inability to come up with those descriptions might be pretty much equivalent to the inability to write an understandable human-level AI architecture. That might be impossible, or it might be doable, but it doesn’t seem like we’ll find out by watching philosophers keep doing things with present-day philosopher toolkits. The people poking at the stuff are neuroscientists and computer scientists, and there’s a new aspect to that work of looking at a “mechanized” mind from the outside (see for instance the predictive coding stuff on the neuroscience side) that seems very foreign to how philosophy operates.
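As a cartoon of the predictive coding idea just mentioned (a hypothetical toy, not any specific model from the neuroscience literature): a unit maintains a prediction of its input and does nothing but update that prediction to shrink the prediction error.

```python
# Toy predictive-coding-style loop: a unit holds an estimate `mu` of a
# hidden cause and nudges it toward each noisy observation, i.e. it acts
# only to reduce its own prediction error. Illustrative values throughout.
import random

random.seed(0)  # reproducible noise

true_value = 4.0       # hidden cause generating the observations
mu = 0.0               # the unit's current prediction
learning_rate = 0.1

for _ in range(200):
    observation = true_value + random.gauss(0, 0.5)  # noisy sensory input
    prediction_error = observation - mu
    mu += learning_rate * prediction_error           # reduce future error

# mu ends up close to the hidden cause purely through error correction
```

The “mechanized mind from the outside” flavor is visible even in the toy: nothing in the loop mentions what the estimate is about, yet the behavior is fully inspectable.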
Second thing is, I read this and I’m asking, “so, what’s the actual problem we’re trying to solve?” You seem to be speaking from a position of general methodological unhappiness with philosophy, where the problem is something like “you want to do philosophy as it’s always been done, and you want it to get traction at the cutting edge of the intellectual problems of the present day”. Concrete problems might be “understand how humans came to be and how they are able to do all the complex human thinking stuff”, which is a lot of neuroscience plus some evolutionary biology, or “build a human-level artificial intelligence that will act in human interests no matter how powerful it is”, which, well, the second part is looking pretty difficult, so the ideal answer might be “don’t”, but the first part seems to be coming along with a whole lot of computer science and hasn’t needed much input from philosophy so far. “Help people understand their place in the world and themselves, and find life satisfaction” is a different goal again, and something a lot of philosophy used to be about. Taking the high-level human concepts that we don’t yet have satisfactory reductions for as granted could work fine at this level. But there seems to be a sense here of philosophers becoming glorified talk therapists, which doesn’t really feel like a satisfactory answer either.
Yeah, I agree with a lot of this. Especially:

If you want to have some fun, you can reach for Rice’s theorem (basically following from Turing’s halting problem), which shows that you can’t decide any non-trivial semantic property of an undocumented computer program from its code alone. Various existing property-inferrer groups like hackers and biologists will nod along and then go back to poking the opaque mystery blobs with various clever implements and taking copious notes of what they do when poked, even though full logical closure is not available.
I take it that this is how most progress in artificial intelligence, neuroscience, and cogsci has (and will continue) to proceed. My caution—and whole point in wading in here—is just that we shouldn’t expect progress by trying to come up with a better theory of mind or agency, even with more sophisticated explanatory tools.
I think it’s totally coherent, and even likely, that future artificial agents (generally intelligent or not) will be created without a general theory of mind or action.
In this scenario, you get a complete causal understanding of the mechanisms that enable agents to become minded and intentionally active, but you still don’t know what that agency or intelligence consists in beyond our simple, non-reductive folk-psychological explanations. A lot of folks in this scenario would be inclined to say “who cares, we got the gears-level understanding”, and I guess the only people who would care would be those who wanted to use the reductive causal story to tell us what it means to be minded. The philosophers I admire (John McDowell is the best example) appreciate the difference between causal and constitutive explanations when it comes to facts about minds and agents, and urge that progress in the sciences is hindered by running these together. They see no obstacle to technical progress in neuroscientific understanding or artificial intelligence; they just see themselves as sorting out what these disciplines are and are not about. They don’t think they’re in the business of giving constitutive explanations of what minds and agents are; rather, they’re in the business of discovering what enables minds and agents to do their minded and agential work.

I think this distinction is apparent even with basic biological concepts like life. Biology can give us a complete account of the gears that enable life to work as it does without shedding any light on what makes it the case that something is alive, functioning, fit, goal-directed, successful, etc. But that’s not a problem at all if you think the purpose of biology is just to enable better medicine and engineering (like making artificial life forms or agents). As for a task like “given a region of physical space, identify whether there’s an agent there”, I don’t think we should expect any theory, philosophical or otherwise, to yield solutions to it.
I’m sure we can build artificial systems that can do it reliably (probably already have some), but it won’t come by way of understanding what makes an agent an agent.
Insofar as one hopes to advance certain engineering projects by “sorting out fundamental confusions about agency”, I just wanted to offer (1) that there’s a rich literature in contemporary philosophy, continuous with the sciences, about different approaches to doing just that; and (2) that there are interesting arguments in this literature which aim to demonstrate that any causal-historical theory of these things will face an apparently intractable dilemma: either beg the question or be unable to make the distinctions needed to explain what agency and mentality consist in.
To summarize the points I’ve been trying to make (meanderingly, I’ll admit): On the one hand, I applaud the author for prioritizing that confusion-resolution; on the other hand, I’d urge them not to fall into the trap of thinking that confusion-resolution must take the form of stating an alternative theory of action or mind. The best kind of confusion-resolution is the kind that Wittgenstein introduced into philosophy, the kind where the problems themselves disappear—not because we realize they’re impossible to solve with present tools and so we give up, but because we realize we weren’t even clear about what we were asking in the first place (so the problems fail to even arise). In this case, the problem that’s supposed to disappear is the felt need to give a reductive causal account of minds and agents in terms of the non-normative explanatory tools available from maths and physics. So, go ahead and sort out those confusions, but be warned about what that project involves, who has gone down the road before, and the structural obstacles they’ve encountered both in and outside of philosophy so that you can be clear-headed about what the inquiry can reasonably be expected to yield.
That’s all I’ll say on the matter. Great back and forth; I don’t think there’s really much distance between us here. And for what it’s worth, mine is a pretty niche view in philosophy, because taken to its conclusion it means that the whole pursuit of trying to explain what minds and agents are is just confused from the gun—not limited by the particular set of explanatory tools presently available, but conceptually confused. Once that’s understood, one stops practicing or funding that sort of work. It is totally possible, and advisable, to keep studying the enabling gears so we can do better medicine and engineering, but we should get clear on how that medical or engineering understanding will advance, and on what those advances mean for the fundamental questions about what makes life, agents, and minds what they are. Good philosophy helps to dislodge us from the grip of expecting anything non-circular and illuminating in answer to those questions.