In my opinion, the most frequent error is modelling a group of humans as a single human.
I think it’s a bit more complicated. I see nothing wrong with modeling a group of humans as a single entity which has, say, particular interests, traditions, incentives, etc. There are big differences between “government” and “politicians in the government”—an obvious one would be that politicians come and go, but the government (including a very large and very influential class of civil servants) remains.
I am not saying that we should anthropomorphise entities, but treating them just as a group of humans doesn’t look right either.
> I see nothing wrong with modeling a group of humans as a single entity which has, say, particular interests, traditions, incentives, etc.
Such a model ignores, for example, minorities which don’t share the interests of the majority, or the internal fighting between people who have the same interests but compete with each other for scarce resources (such as status within the group).
As a result, a group of humans modelled this way will seem like a conspiracy: depending on whether you choose to model all failures of coordination as “this is what the entity really wants” or as “this is what the entity doesn’t want, but does anyway”, the entity will come across as either evil or crazy.
Well, let’s step back a little bit.
How good a model is cannot be determined without specifying the purpose of the model. There is no universally correct granularity: some models track a lot of little details and effects, while others aggregate all of them into a few measures or indicators. Either type can be useful, depending on the purpose; in particular, a more granular model is not necessarily a better model.
This general principle applies here as well. Sometimes you do want to model a group of humans as a group of distinct humans, and sometimes you want to model a group of humans as a single entity.
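The granularity point can be made concrete with a toy sketch (all numbers and the poverty-line threshold below are invented for illustration): a coarse model that compresses the group into one indicator answers some questions well while making others invisible.

```python
# Toy incomes for a five-member group (numbers invented for illustration).
incomes = [12_000, 15_000, 18_000, 90_000, 95_000]

# Coarse model: the group as a single entity, summarized by one indicator.
mean_income = sum(incomes) / len(incomes)

# Fine model: track each member individually.
poverty_line = 20_000  # assumed threshold, for illustration only
below_line = sum(1 for x in incomes if x < poverty_line)

print(mean_income)  # 46000.0 -- adequate for estimating total purchasing power
print(below_line)   # 3 -- invisible to the coarse model, whose mean sits far above the line
```

Neither model is "the better one" in the abstract: the aggregate answers a revenue-style question cheaply, while only the per-member model can answer a question about the minority.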
It’s a bit more complicated, but still basically true: a group is not very well modeled as an individual. Heck, I’m not sure individual humans have sufficient consistency over time to be well modeled as individuals. I suspect that [Arrow’s Theorem](https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem) applies to subpersonal thinking modules as well as it does to whole people.
A single entity which can believe and act simultaneously in contradictory ways is not really a single entity, is it?
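A minimal sketch of why, using the classic Condorcet cycle that motivates Arrow’s theorem (member names and preference orders are invented for illustration): three internally consistent members aggregate, under pairwise majority vote, into a “group agent” whose preferences are cyclic, which no single consistent agent’s could be.

```python
# Each member's ranking over options A, B, C, best first
# (names and orderings invented for illustration).
members = {
    "member1": ["A", "B", "C"],
    "member2": ["B", "C", "A"],
    "member3": ["C", "A", "B"],
}

def majority_prefers(x, y):
    """True if a majority of members rank x above y."""
    votes = sum(1 for ranking in members.values() if ranking.index(x) < ranking.index(y))
    return votes > len(members) / 2

# The "group as one agent" preference relation, read off pairwise votes:
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"group prefers {x} to {y}: {majority_prefers(x, y)}")
# Each pairwise vote goes 2-to-1, so the group "prefers" A to B, B to C,
# and C to A -- an intransitive preference that no individual rational
# agent could hold, even though every member's own ranking is consistent.
```

This is the sense in which modeling the group as one agent breaks down: the aggregate “entity” can simultaneously hold commitments that contradict each other.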
See my answer to Viliam...