I love this post! I’ve been thinking about it a lot. I think it’s mostly wrong, but it’s been very stimulating. The main problem is that it wobbles between treating agency in the stated sense of “a capacity to act and make choices”, treating it as intelligence, and treating it as personhood. For one thing, a game player whose moves are predictable will seem dumb, but they still have agency in the stated sense.
Let me see if I can sketch out some slightly improved categories. A predictably moving pendulum is non-agentic, and therefore also not in the class of things we see as stupid or intelligent: we don’t use inverse planning to infer a pendulum’s beliefs, preferences, goals, or behavioral intents. When we see an event that’s harder to predict or retrodict, like the swaying of a weather vane, the wandering of a tornado, or the identity of a card drawn from a deck, we reach for explanations, and planning-process explanations may occasionally be credible for a moment, and credible for longer than a moment if someone is indeed pulling the strings.
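(To gesture at what I mean by inverse planning, here’s a minimal toy sketch in Python. Everything in it — the 1-D track, the candidate goals, the softmax-rational policy, the numbers — is my own hypothetical illustration, not anything from the post.)

```python
# A toy "inverse planning" sketch: infer an agent's goal from its observed
# moves on a 1-D track, assuming a noisily rational (softmax) policy.
# The track, goals, and numbers are hypothetical illustration.
import math

candidate_goals = [0, 9]                 # hypotheses about what the agent wants
observed = [(4, +1), (5, +1), (6, +1)]   # (position, move) pairs we saw
beta = 2.0                               # rationality: higher = more predictable

def move_likelihood(pos, move, goal):
    """P(move | position, goal) under a softmax over progress toward goal."""
    moves = [-1, +1]
    utilities = [-abs((pos + m) - goal) for m in moves]   # closer = better
    weights = [math.exp(beta * u) for u in utilities]
    return weights[moves.index(move)] / sum(weights)

# Bayes with a uniform prior: posterior over goals given the trajectory.
unnormalized = {}
for g in candidate_goals:
    like = 1.0
    for pos, move in observed:
        like *= move_likelihood(pos, move, g)
    unnormalized[g] = like
z = sum(unnormalized.values())
for g in candidate_goals:
    print(f"P(goal={g} | moves) = {unnormalized[g] / z:.3f}")
```

Notice that cranking up beta makes the agent *more* predictable and the goal inference *easier* — we end up confidently attributing goal-directedness to a very predictable thing, which is the opposite of what the post’s predictability story would suggest.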
As for the relationship between estimated intelligence and the algorithmic comprehensibility of a goal-directed agent: I propose that AlphaGo seems intelligent even to the people who wrote it while they’re playing against it, and that the determining factor in judging intelligence (alongside the success of the move) is rather the judge’s uncertainty about individual moves or actions. Or maybe uncertainty about the justification of the move? Either way, a person’s ability to examine (or even edit) a mind in detail shouldn’t necessarily cause them to see that mind as inanimate, non-agentic, non-consequentialist, non-intelligent, non-personlike, or unworthy of moral consideration. Whew! I’m glad that the moral status of a thing doesn’t depend on an attribution I can only make in ignorance. That would be devastating!
To recap in more direct terms: predictable things can have agency. Choice-processes are one kind of explanation that may be considered when examining an unpredictable thing, but that doesn’t mean prediction uncertainty always or typically leads to attributions of agency or intelligence (and I really think that a hypothesis space of “agenty or random” is very impoverished).
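(Here’s another toy sketch of what a less impoverished hypothesis space could look like: scoring a periodic/physics model, a random model, and a goal-directed model against the same observations. Again, the models and numbers are my own made-up illustration, not the post’s proposal.)

```python
# A toy sketch of a less impoverished hypothesis space: score three
# generative models of an observed sequence, not just "agenty or random".
# Everything here (models, data, priors) is hypothetical illustration.
import math

observed = [0, 1, 0, 1, 0, 1]      # e.g., a thing alternating between two states

def log_likelihood_periodic(seq):
    """Deterministic oscillator (pendulum-like): alternates with prob ~1."""
    eps = 0.01                      # small noise so the model isn't dogmatic
    return sum(math.log(1 - eps if cur != prev else eps)
               for prev, cur in zip(seq, seq[1:]))

def log_likelihood_random(seq):
    """I.i.d. coin flips: every state equally likely at every step."""
    return (len(seq) - 1) * math.log(0.5)

def log_likelihood_goal(seq, goal=1):
    """Goal-seeker: prefers to stay at its goal state once reached."""
    stay = 0.9
    return sum(math.log(stay if cur == goal else 1 - stay)
               for prev, cur in zip(seq, seq[1:]))

models = {
    "periodic (pendulum)": log_likelihood_periodic(observed),
    "random (coin)": log_likelihood_random(observed),
    "goal-directed": log_likelihood_goal(observed),
}
# Uniform prior over models; normalize the likelihoods to get a posterior.
z = sum(math.exp(ll) for ll in models.values())
for name, ll in models.items():
    print(f"P({name} | data) = {math.exp(ll) / z:.3f}")
```

On alternating data the pendulum-like model wins outright: the prediction uncertainty gets resolved without ever invoking a planner.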
As for the cognitive antecedents of personifying inanimate things, and the conditions under which an impersonal, debugging mindset is useful in dealing with people, those are very interesting. From your examples, it seems like attributing personhood carries an implicit belief that the thing will respond predictably to displays of anger, and that an impersonal stance is useful or appropriate when a person won’t alter their tendency toward a behavior upon hearing expressions of negative moral judgement. This would mean that when a judge makes a well-founded attribution of personhood, what they’re doing is recognizing that the thing’s behavior is well explained by something like “a decision process that considers social norms and optimizes its behavior with an eye toward social consequences”. As for what leads to unfounded attributions of personhood, like getting uselessly angry at a beeping smoke alarm, that’s still an open question in my mind! Or maybe getting angry at a smoke alarm isn’t a kind of personification, and hitting inanimate things that won’t stop noising at you is actually a fine response? Hmmmm.
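(If it helps, here’s a toy sketch of that proposed test: a decision process that folds social feedback into its choices, next to one — like the smoke alarm — that doesn’t. The payoffs and penalties are made-up illustration, not a claim about how anyone actually models this.)

```python
# A toy sketch of the proposed "personhood test": a decision process whose
# action choice takes social feedback into account, versus one that doesn't.
# All names and numbers are hypothetical illustration.

def social_agent(action_payoffs, social_penalties):
    """Picks the action maximizing payoff minus expected social cost."""
    return max(action_payoffs,
               key=lambda a: action_payoffs[a] - social_penalties.get(a, 0.0))

def smoke_alarm(_social_penalties):
    """Ignores social feedback entirely: beeps whenever its sensor says so."""
    return "beep"

# Before anyone gets angry: littering looks cheap to the social agent.
payoffs = {"litter": 1.0, "use_bin": 0.5}
print(social_agent(payoffs, {}))                 # -> litter

# Expressed anger raises the social penalty on littering; behavior changes.
print(social_agent(payoffs, {"litter": 2.0}))    # -> use_bin

# The alarm's behavior is invariant to anger, so yelling at it is useless.
print(smoke_alarm({"beep": 100.0}))              # -> beep
```

The anger-responsive agent changes its behavior when the social cost goes up; the alarm’s policy has no input for that, which is maybe why yelling at it feels so unsatisfying.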
A while ago I asked myself, “If agency is bugs and uncertainty, does that tell us anything about akrasia?” Well, I no longer think that agency is bugs and uncertainty, but re-reading the comments in the thread gave me some new insight into the topic. Oh shit, it’s 3:00 am. Maybe I’ll sleep on it and write it up tomorrow.