They really don’t maximise that value… you’d get closer to the mark if you added words like “profit” and “executive pay”.
But the main reason I don’t think they have a coherent goal is that they may evaporate tomorrow. If the goal-seeking agents that make them up decide there is somewhere better to fulfil their goals, they can just up and leave, and the goal does not get fulfilled. An organisation has to balance the variety of goals of the agents inside it (which constantly change as new people arrive) with its business goals, if it is to survive. Sometimes no one making up an organisation wants it to survive.
Organisms die as well as organisations. That doesn’t mean they are not goal-directed.
Nor do organisms act entirely harmoniously. There are millions of bacterial symbionts inside every animal, each with its own reproductive ends. Animals’ bodies are infected with pathogens, which make them sneeze, cough and scratch. Also, animals are uneasy coalitions of genes—some of which (e.g. segregation distorters) want to do other things besides helping the organism reproduce. So, if you rule out companies on those grounds, organisms seem unlikely to qualify either.
In practice, both organisms and companies are harmonious enough for goal-seeking models to work as reasonably good predictors of their behaviour.
If I want to predict what a company will do, I look at the board of directors/CEO/upper management/powerful unions, not at the company’s previous actions. This lets me predict whether they will refocus the company on something new, or sell it off to be gutted.
Companies are not the only agents which can be so dissected. You could similarly examine the brain of an animal—or examine the source code of a robot.
However, treating agents in a behavioural manner, as input-process-output black boxes, is a pretty conventional method of analysis.
Sure, it has some disadvantages. If an organism is under attack by a pathogen and near to death, its previous behaviour may not be an accurate predictor of its future actions. However, that is not usually the case—and there are corresponding advantages.
For example, you might not have access to the organism’s internal state—in which case a “black box” analysis would be attractive.
Anyway, your objections don’t look too serious to me. Companies typically behave in a highly goal-directed manner—broadly similar to the way in which organisms behave—and for similar reasons.
Yes, yes. It is all a continuum. That doesn’t change the fact that I don’t use the intentional stance on businesses. I view them as either (a) designed, inflexible systems for providing me services in exchange for money, or (b) systems modified by human actors for their own interests.
I’ll very rarely say “Google is trying to do X” or “Microsoft knows Y”. I do say such things of humans and animals; in that respect I see a firm dividing line between them, in terms of the type of system I treat them as.
I think the human brain has a whole bunch of circuitry designed to understand other agents, by modelling them as modified versions of yourself.
That circuitry can also be pushed into use for understanding the behaviour of organisations, companies and governments—since those systems have enough “agency” to make the analogy more fruitful than confusing.
My take on the issue is that this typically results in more insight for less effort.
Critics might say this is anthropomorphism—but IMO, it pays.