So agentiness is having an uncomputable probability distribution?
I don’t know if I would put it quite that way; just that if you cannot predict someone’s or something’s behavior with any degree of certainty, they seem more agenty to you.
The weather does not seem at all agenty to me. (People in former times have so regarded it; but we are not talking about former times.)
We have probabilistic models of the weather; ensemble forecasts. They’re fairly accurate. You can plan a picnic using them. You cannot use probabilistic models to predict the conversation at the picnic (beyond that it will be about “the weather”, “the food”, etc.).
What I mean by computable probability distribution is that it’s tractable to build a probabilistic simulation that gives useful predictions. An uncomputable probability distribution is intractable to build such a simulation for. Knightian Uncertainty is a good name for the state of not being able to model something, but not a very quantitative one (and arguably I haven’t really quantified what makes a probabilistic model “useful” either).
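To make “tractable” concrete, here is a minimal sketch in Python of the ensemble-forecast idea. The dynamics and every number in it are invented for illustration; real forecast models are vastly more elaborate, but the shape of the computation is the same: run many noisy simulations forward and read predictions off the empirical distribution.

```python
import random

def step(temp, noise=1.5):
    """One day of a toy weather model: drift toward a seasonal mean, plus noise."""
    return temp + 0.3 * (20.0 - temp) + random.gauss(0, noise)

def ensemble_forecast(today_temp, days=3, members=1000):
    """Run many noisy simulations forward; the spread of outcomes
    is the (computable) predictive distribution."""
    outcomes = []
    for _ in range(members):
        t = today_temp
        for _ in range(days):
            t = step(t)
        outcomes.append(t)
    return sorted(outcomes)

outcomes = ensemble_forecast(today_temp=15.0)
# A picnic-planning query: P(temperature > 18C three days from now)
p_warm = sum(o > 18.0 for o in outcomes) / len(outcomes)
print(f"P(>18C) ~= {p_warm:.2f}, 90% interval: "
      f"[{outcomes[50]:.1f}, {outcomes[950]:.1f}]")
```

The point is only that the whole predictive distribution falls out of a cheap simulation you can query; nothing analogously cheap exists for predicting the conversation at the picnic.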
I think the computability of probability distributions is probably the right way to classify relative agency, but we also tend to recognize agency through goal detection. We think actions are “purposeful” because they correspond to actions we’re familiar with in our own goal-seeking behavior: searching, exploring, manipulating, energy-conserving motion, etc. We may even fail to recognize agency in systems that use actions we aren’t familiar with or whose goals are alien (e.g. are trees agents? I’d argue yes, but most people don’t treat them like agents compared to, say, weeds). The weather’s “goal” is to reach thermodynamic equilibrium, using tornadoes and other gusts of wind as its actions. It would be exceedingly efficient at that if it weren’t for the pesky sun. The sun’s goal is to expand, shed some mass, then cool and shrink into its own final thermodynamic equilibrium. It will Win unless other agents interfere or a particularly unlikely collision with another star happens.
Before modern science, no one would have imagined those were the actual goals of the sun and the wind, and so their periodic, meaningful-seeming actions suggested agency toward an unknown goal. After physics, the goals and actions were so predictable that the sense of agency was lost.
I agree. As I mentioned in the post, expected randomness is not the same as unpredictability. Also as mentioned in the post, if you were trying to escape, say, a tornado, and repeatedly failed to predict where it would move, ending up in danger again and again, it would feel to you as if this weather phenomenon “has a mind of its own”.
Another example is the original Gaia hypothesis, which framed a local equilibrium of the Earth’s environment in teleological terms.
The tornado isn’t going to follow you by chance. In fact, if it does follow you despite your efforts to evade it, that would be evidence of agentiness, of purpose. Something would have to be actively trying to steer it towards you.
For those going to LWCW in Berlin this weekend, this is one of the things I’ll be talking about.
Here is a counterexample: Suppose, unbeknownst to you, your movement creates a disturbance in the air that results in the tornado changing its path. Unless you can deduce this, you would assign agentiness to a weather phenomenon, whereas the only agentiness here (if any) is your own.
Oh, and if you have slides or a transcript of your talk, feel free to post them here; could be interesting.
At this point we must play “follow the improbability”. When you imagine the tornado following you around, however you try to get away from it, I ask, “what is the mechanism of this remarkably improbable phenomenon?” It seems that the agency is being supplied by your imagination, wrapped up in the word “suppose”.
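To put a number on “remarkably improbable”, here is a toy Monte Carlo sketch, with assumptions entirely my own: a tornado that drifts at random (even biased in your direction) while you flee east in a straight line.

```python
import random

def chance_follow_prob(steps=20, radius=2.0, trials=50_000):
    """Estimate how often a randomly drifting 'tornado' happens to stay
    within `radius` of a person fleeing steadily east. All numbers invented."""
    hits = 0
    for _ in range(trials):
        px, tx, ty = 0.0, 0.0, 0.0       # person and tornado start together
        followed = True
        for _ in range(steps):
            px += 1.0                     # you flee east at unit speed
            tx += random.gauss(1.0, 1.0)  # tornado drifts east on average...
            ty += random.gauss(0.0, 1.0)  # ...but wanders at random
            if (tx - px) ** 2 + ty ** 2 > radius ** 2:
                followed = False
                break
        hits += followed
    return hits / trials

print(chance_follow_prob())  # a small number: sustained pursuit by chance is rare
```

In this toy model, sustained following by chance almost never happens; anything that reliably tracks you needs a mechanism doing the tracking.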
More illuminating are some real examples of something following something else around.
1. A bumper sticker follows the car it is attached to. Wherever the car goes, there goes the bumper sticker.
2. Iron filings follow a magnet around.
3. Objects near a planet follow the planet around.
4. A dog follows its master around.
Here, we’re looking closely at the edge of the concept of purpose, and that it may be fuzzy is of little significance, since everything is fuzzy under a sufficiently strong magnifying glass. I draw the line between purpose and no purpose between 3 and 4. One recipe for drawing the line is that purpose requires the expenditure of some source of energy to accomplish the task. Anything less and it is like a ball in a bowl: the energy with which it tries to go to the centre was supplied by the disturbance that knocked it away. You expend no energy to remain near the Earth’s surface; the dog does expend energy to stay with its master.
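The proposed dividing line can be put in code. In this toy sketch (entirely my own construction, with made-up dynamics), the passive follower gets its tracking “for free” from the coupling, while the active follower pays for every step out of an internal store and stops following when it runs dry.

```python
def passive_follower(pos, target, k=0.5):
    """Ball-in-a-bowl: a restoring pull toward the target. Whatever 'effort'
    appears here was supplied by the disturbance; no internal energy store."""
    return pos + k * (target - pos)

def active_follower(pos, target, energy, speed=1.0, cost=1.0):
    """Dog-like: spends stored energy to close the gap. When the store
    is empty, the purposeful following stops."""
    if energy <= 0:
        return pos, energy               # exhausted: no pursuit
    move = max(-speed, min(speed, target - pos))
    return pos + move, energy - cost

target, ball, dog, fuel = 0.0, 0.0, 0.0, 5.0
for t in range(10):
    target += 1.0                        # the master keeps walking
    ball = passive_follower(ball, target)
    dog, fuel = active_follower(dog, target, fuel)
    print(f"t={t}  target={target:.1f}  ball={ball:.1f}  dog={dog:.1f}  fuel={fuel:.1f}")
```

The ball tracks indefinitely without spending anything; the dog tracks exactly until its energy budget runs out. On the energy criterion, only the second kind of following counts as purpose.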
I won’t know what I’m going to say until I’ve said it, but I’ll try to do a writeup afterwards.
That’s consistent with the following modified claim: in the absence of firm knowledge of how agenty a thing “really” is, you will tend to take its unpredictability as an indication of agentiness.
However, I am skeptical about that too; the results of die rolls and coin flips don’t seem very agenty to most people (though to some gamblers I believe they do). Perhaps what it takes is a combination of pattern and unpredictability? If your predictions are distinctly better than chance but nothing you can think of makes them perfect, that feels like agency. Especially if the difference between your best predictions and reality isn’t a stream of small random-looking errors but has big fat tails with occasional really large errors. Maybe.
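The fat-tails intuition is easy to demonstrate with a toy comparison; the distributions below are chosen purely for illustration. One error stream is noisy but tame; the other is mostly small with occasional huge surprises, as when a model is usually adequate and then something it never captured happens.

```python
import random

random.seed(0)
N = 100_000

# Thin-tailed errors: an understood-but-noisy process (coin-flip-like).
thin = [random.gauss(0, 1) for _ in range(N)]

# Fat-tailed errors: 99% small, 1% drawn from a much wider distribution.
fat = [random.gauss(0, 1) if random.random() > 0.01 else random.gauss(0, 20)
       for _ in range(N)]

for name, errs in (("thin", thin), ("fat", fat)):
    big = sum(abs(e) > 5 for e in errs) / N
    print(f"{name}: P(|error| > 5) = {big:.5f}, "
          f"worst = {max(abs(e) for e in errs):.1f}")
```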
I think the perception of agency is linked not to unpredictability, but rather to the feeling of “I don’t understand”.
Coin flips are unpredictable, but we understand them very well. Weather is (somewhat) unpredictable as well, but we all have a lot of experience with it and think we understand it. But some kind of complex behaviour and we have no idea what’s behind it? Must be agency.
I think unpredictability is a complete red herring here. What I notice about the original examples is that the perceived lack of agency was not merely because the game-player was predictable, but because they were predictably wrong. Had they been predictably right, in the sense that the expert player watching them understood from their play how they were thinking and judged their strategy favourably, I doubt the expert would be saying they were “playing like a robot”.
I happen to have a simulation of a robot here. (Warning: it’s a Java applet, so if you really want to run it you may have to jump through security hoops to convince your machine to do so.) In hunting mode, it predictably finds and eats the virtual food particles. I am quite willing to say it has agency, even though I wrote it and know exactly how it works. A limited agency, to be sure, compared with humans, but the same sort of thing.
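Schematically, the hunting mode amounts to something like the following sketch (a stripped-down Python reconstruction of the idea, not the applet’s actual code; every name and number here is invented):

```python
import math

def hunt_step(robot, foods, speed=1.0):
    """One tick of 'hunting mode': head straight for the nearest food
    particle and eat it on contact. Entirely predictable, yet goal-directed."""
    if not foods:
        return robot, foods
    nearest = min(foods, key=lambda f: math.dist(robot, f))
    d = math.dist(robot, nearest)
    if d <= speed:                       # close enough: eat it
        return nearest, [f for f in foods if f != nearest]
    x, y = robot
    fx, fy = nearest
    return (x + speed * (fx - x) / d, y + speed * (fy - y) / d), foods

robot, foods = (0.0, 0.0), [(3.0, 4.0), (-2.0, 1.0), (6.0, -1.0)]
while foods:
    robot, foods = hunt_step(robot, foods)
print("all food eaten, ending at", robot)
```

Anyone who reads the loop can predict every move, and yet “it finds and eats the food” remains a perfectly good description of what it is up to.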