I’ve read the SEP entry on agency and was surprised by how irrelevant it feels to whatever it is that makes me interested in agency. Here I sketch some of the differences by comparing an imaginary Philosopher of Agency (roughly the embodiment of the approach the “philosopher community” seems to take to these topics) and an Investigator of Agency (roughly the approach exemplified by the LW/AI Alignment crowd).[1]
If I were to put my finger on one specific difference, it would be that Philosopher is looking for the true-idealized-ontology-of-agency-independent-of-the-purpose-to-which-you-want-to-put-this-ontology, whereas Investigator wants a mechanistic model of agency, which would include a sufficient understanding of goals, values, dynamics of development of agency (or whatever adjacent concepts we’re going to use after conceptual refinement and deconfusion), etc.
Another important difference is Investigator’s readiness to take their intuitions as a starting point while assuming they will require at least a bit of refinement before they start robustly carving reality at its joints. Sometimes you may even need to discard almost all of your intuitions and carefully rebuild your ontology from scratch, bottom-up. Philosopher, on the other hand, seems to (at least more often than Investigator) implicitly assume that their System 1 intuitions can be used as the ground truth of the matter, and that the quest to formalize agency ends when the formalism perfectly captures all of our intuitions and doesn’t introduce any weird edge cases.
Philosopher asks, “what does it mean to be an agent?” Investigator asks, “how do we delineate agents from non-agents (or specify some spectrum of relevant agency-adjacent properties), such that the distinction tells us something of practical importance?”
Deviant causal chains are posed as a “challenge” to “reductive” theories of agency, which try to explain agency by reducing it to causal networks.[2] So what’s the problem? Quoting:
… it seems always possible that the relevant mental states and events cause the relevant event (a certain movement, for instance) in a deviant way: so that this event is clearly not an intentional action or not an action at all. … A murderous nephew intends to kill his uncle in order to inherit his fortune. He drives to his uncle’s house and on the way he kills a pedestrian by accident. As it turns out, this pedestrian is his uncle.
At least in my experience, this is another case of a Deep Philosophical Question that no longer feels like a question, once you’ve read The Sequences or had some equivalent exposure to the rationalist (or at least LW-rationalist) way of thinking.
About a year ago, I had a college course in philosophy of action. I recall having some reading assigned in which the author basically argued that for an entity to be an agent, it needs to have an embodied feeling-understanding of action. Otherwise, it doesn’t act, so it can’t be an agent. No, it doesn’t matter that it’s out there disassembling Mercury and reusing its matter to build the Dyson Sphere. It doesn’t have the relevant concept of action, so it’s not an agent.

[1] This is not a general diss on philosophizing; I certainly think there is value in philosophy-like thinking.

[2] My wording, not SEP’s, but I think it’s correct.
You are suffused with a return-to-womb mentality—desperately destined for the material tomb. Your philosophy is unsupported. Why do AI researchers think they are philosophers when it’s very clear they are deeply uninvested in the human condition? There should be another term, ‘conjurers of the immaterial snake oil’, to describe the actions you take when you riff on Dyson Sphere narratives to legitimize your paltry and thoroughly uninteresting research.
Does severe vitamin C deficiency (i.e. scurvy) lead to oxytocin depletion?

According to Wikipedia:

The activity of the PAM enzyme [necessary for releasing oxytocin from the neuron] system is dependent upon vitamin C (ascorbate), which is a necessary vitamin cofactor.
I.e. if you don’t have enough vitamin C, your neurons can’t release oxytocin. Common-sensically, this should lead to some psychological/neurological problems, maybe with empathy/bonding/social cognition?
Quick googling of “scurvy mental problems” or “vitamin C deficiency mental symptoms” doesn’t return much on that. This meta-analysis finds some association of sub-scurvy vitamin C deficiency with depression, mood problems, worse cognitive functioning, and some other psychiatric conditions, but no mention of what I’d expect from a lack of oxytocin. Possibly oxytocin is produced at low enough levels that releasing it requires very little vitamin C, so even a deficiency doesn’t really matter? But on the other hand (Wikipedia again):
By chance, sodium ascorbate by itself was found to stimulate the production of oxytocin from ovarian tissue over a range of concentrations in a dose-dependent manner.
So either this (i.e. disturbed social cognition) is not how we should expect oxytocin deficiencies to manifest, or vitamin C deficiency manifests in so many ways in the brain that you don’t even bother with “they have worse theory of mind than when they ate one apple a day”.

Just a detail, but shouldn’t this be one orange a day? Apples do not contain much vitamin C.

Huh, you’re right. I thought most fruits have enough to cover daily requirements.
Googling for “scurvy low mood”, I find plenty of sources that indicate that scurvy is accompanied by “mood swings — often irritability and depression”. IIRC, this has been remarked upon for at least two hundred years.
That’s also what this meta-analysis found, but I was mostly wondering about social cognition deficits (though looking back, I see it’s not clear in the original shortform).
Mlyyrczo, I summon Thee!
Hi.