Organisms die just as organisations do. That doesn’t mean they are not goal-directed.
Nor do organisms act entirely harmoniously. Every animal carries millions of bacterial symbionts, each with its own reproductive ends. Their bodies are infected by pathogens, which make them sneeze, cough and scratch. Animals are also uneasy coalitions of genes, some of which (e.g. segregation distorters) pursue ends other than helping the organism reproduce. So, if you rule out companies on those grounds, organisms seem unlikely to qualify either.
In practice, both organisms and companies are harmonious enough for goal-seeking models to work as reasonably good predictors of their behaviour.
If I want to predict what a company will do, I look at the board of directors, CEO, upper management and powerful unions, not at the company’s previous actions. This lets me predict whether they will refocus the company on something new or sell it off to be gutted.
Companies are not the only agents which can be so dissected. You could similarly examine the brain of an animal—or examine the source code of a robot.
However, treating agents behaviourally, as input-process-output black boxes, is a pretty conventional method of analysing their behaviour.
Sure, it has some disadvantages. If an organism is under attack by a pathogen and is near death, its previous behaviour may not be an accurate predictor of its future actions. However, that is not usually the case, and there are corresponding advantages.
For example, you might not have access to the organism’s internal state—in which case a “black box” analysis would be attractive.
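To make that concrete, here is a minimal Python sketch (my own toy illustration, not anything from the discussion above; the class and the stimulus strings are invented for the example). It models an agent purely from observed stimulus-response pairs, which is exactly all you have when the internal state is off-limits:

```python
from collections import Counter, defaultdict

class BlackBoxModel:
    """Predict an agent's next action purely from its observed
    (stimulus -> action) history, with no access to internals."""

    def __init__(self):
        # For each stimulus, count how often each response was seen.
        self.history = defaultdict(Counter)

    def observe(self, stimulus, action):
        self.history[stimulus][action] += 1

    def predict(self, stimulus):
        # Predict the most frequent past response, if any was observed.
        seen = self.history[stimulus]
        return seen.most_common(1)[0][0] if seen else None

# Usage: modelling a company we can only watch from the outside.
model = BlackBoxModel()
model.observe("rival cuts prices", "cut prices")
model.observe("rival cuts prices", "cut prices")
model.observe("demand falls", "reduce output")
print(model.predict("rival cuts prices"))  # -> "cut prices"
```

The point of the sketch is only that such a predictor never looks inside the agent; its weakness, as noted above, is that it fails precisely when the internals change (a dying organism, a new CEO).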
Anyway, your objections don’t look too serious to me. Companies typically behave in a highly goal-directed manner—broadly similar to the way in which organisms behave—and for similar reasons.
Yes, yes. It is all a continuum. That doesn’t change the fact that I don’t use the intentional stance on businesses. I view them either as (a) inflexible, designed systems that provide me services in exchange for money, or (b) systems modified by human actors for their own interests.
I’ll very rarely say “Google is trying to do X” or “Microsoft knows Y”. I do say such things about humans and animals; in that respect, I see a firm dividing line between them and companies in the type of system I treat them as.
I think the human brain has a whole bunch of circuitry designed to understand other agents by modelling them as modified versions of yourself.
That circuitry can also be pushed into use for understanding the behaviour of organisations, companies and governments—since those systems have enough “agency” to make the analogy more fruitful than confusing.
My take on the issue is that this typically results in more insight for less effort.
Critics might say this is anthropomorphism—but IMO, it pays.