Are you ontologically real or distinct from the sum of your parts? Do you “care” about things only because your constituents do?
I’m suggesting precisely that the group-network levels may be useful in the same sense that the human level or the multicellular-organism level can be useful. Granted, there’s more transfer and overlap when the scale difference is small, but that in itself doesn’t necessarily mean the more customary frame is equally or more useful for any given purpose.
I appreciate the caring-about-money point; it got me thinking about how concepts and motivations/drives translate across levels. I don’t think there’s a clean joint to carve between sophisticated agents and networks-of-said-agents.
Side note: I don’t know of a widely shared paradigm of thought or language well-suited to thinking or talking about tall towers of self-similar, scale-free layers with as much causal spillover between levels as living systems like to have.
Are you ontologically real or distinct from the sum of your parts? Do you “care” about things only because your constituents do?
Nope. Well, maybe. I’m the sum of parts in a given configuration, even as some of those parts are changed, and as the configuration evolves slightly. Not real, but very convenient to model, since my parts are too numerous and their relationships too complicated to identify individually. But I’m not any more than that sum.
I fully agree with your point that there’s no clean joint to carve when choosing between levels of abstraction for modeling behavior (and especially for modeling “caring” or motivation), but I’ll continue to argue that most organizations are small enough that it’s workable to notice the individuals involved, and you get more fidelity and understanding if you do so.