I’ve read the SEP entry on agency and was surprised by how irrelevant it feels to whatever it is that makes me interested in agency. Here I sketch the differences by comparing an imaginary Philosopher of Agency (roughly the embodiment of the approach that the “philosopher community” seems to take to these topics) with an Investigator of Agency (roughly the approach exemplified by the LW/AI Alignment crowd).[1]
If I were to put my finger on one specific difference, it would be that Philosopher is looking for the true-idealized-ontology-of-agency-independent-of-the-purpose-to-which-you-want-to-put-this-ontology, whereas Investigator wants a mechanistic model of agency, which would include a sufficient understanding of goals, values, dynamics of development of agency (or whatever adjacent concepts we’re going to use after conceptual refinement and deconfusion), etc.
Another important component of the Investigator’s approach is the readiness to take one’s intuitions as a starting point while assuming they will require at least some refinement before they start robustly carving reality at its joints. Sometimes you may even need to discard almost all of your intuitions and carefully rebuild your ontology from scratch, bottom-up. Philosopher, on the other hand, seems (at least more often than Investigator) to implicitly assume that their System 1 intuitions can serve as the ground truth of the matter, and that the quest to formalize agency ends when the formalism perfectly captures all of our intuitions without introducing any weird edge cases.
Philosopher asks, “what does it mean to be an agent?” Investigator asks, “how do we delineate agents from non-agents (or specify some spectrum of relevant agency-adjacent properties), such that the distinction tells us something of practical importance?”
Deviant causal chains are posed as a “challenge” to “reductive” theories of agency, which try to explain agency by reducing it to causal networks.[2] So what’s the problem? Quoting the SEP entry:
… it seems always possible that the relevant mental states and events cause the relevant event (a certain movement, for instance) in a deviant way: so that this event is clearly not an intentional action or not an action at all. … A murderous nephew intends to kill his uncle in order to inherit his fortune. He drives to his uncle’s house and on the way he kills a pedestrian by accident. As it turns out, this pedestrian is his uncle.
At least in my experience, this is another case of a Deep Philosophical Question that no longer feels like a question once you’ve read The Sequences or had some equivalent exposure to the rationalist (or at least LW-rationalist) way of thinking.
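To make the structural point concrete, here’s a minimal toy sketch in Python (my own illustration, not anything from SEP): model each story as an ordered causal chain, and notice that the “challenge” only bites if your criterion for intentional action inspects the chain’s endpoints while ignoring whether the realized route matches the route represented in the intention.

```python
# Toy sketch (an assumption-laden illustration, not a real theory): the naive
# reductive criterion "the intention caused the outcome" only inspects the
# endpoints of a causal chain, so it fires on the deviant chain too.

planned_chain = [
    "intends uncle's death",
    "drives to uncle's house",
    "kills uncle deliberately",
    "uncle dies",
]

deviant_chain = [
    "intends uncle's death",
    "drives to uncle's house",
    "accidentally runs over a pedestrian (who happens to be the uncle)",
    "uncle dies",
]

def naive_criterion(chain: list[str]) -> bool:
    """Intentional iff the intention sits at the start and the intended
    outcome at the end -- the endpoints-only test that deviance exploits."""
    return chain[0] == "intends uncle's death" and chain[-1] == "uncle dies"

def mechanistic_criterion(chain: list[str], plan: list[str]) -> bool:
    """Intentional iff the realized causal route matches, step by step,
    the plan that the intention represents."""
    return chain == plan

print(naive_criterion(planned_chain))                        # True
print(naive_criterion(deviant_chain))                        # True  <- the "problem"
print(mechanistic_criterion(planned_chain, planned_chain))   # True  <- still counts
print(mechanistic_criterion(deviant_chain, planned_chain))   # False <- dissolved
```

Once the criterion quantifies over the route and not just the endpoints, the deviant case is classified correctly and the puzzle loses most of its force.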
About a year ago, I took a college course in philosophy of action. I recall being assigned a reading in which the author basically argued that for an entity to be an agent, it needs an embodied feeling-understanding of action. Otherwise, it doesn’t act, so it can’t be an agent. No, it doesn’t matter that it’s out there disassembling Mercury and reusing its matter to build the Dyson Sphere. It doesn’t have the relevant concept of action, so it’s not an agent.
[1] This is not a general diss on philosophizing; I certainly think there is value in philosophy-like thinking.
[2] My wording, not SEP’s, but I think it’s correct.