Edit: On reflection, in many situations insulation from financial pressures may be a good thing, all else being equal. That still leaves the question of how to keep networks in proper contact with reality. As our power increases, it becomes ever easier to insulate ourselves and spiral into self-referential loops.
If civilization really is powered by network learning on the organizational level, then we’ve been doing it exactly wrong. Top-down funding that was supposed to free institutions and companies to pursue their core competencies has the effect of removing reality-based external pressures from the organization’s network structure. It certainly seems as if our institutions have become more detached from reality over time.
Have organizations been insulated from contact with reality in other ways?
Humans in modern societies cannot be disconnected from financial pressures (or, in some communist experiments, pressures of material need intermediated by not-quite-financial mechanisms). Pure insulation from such pressures is unlikely to be possible at any scale, and probably not desirable for most things.
The common concern here is not “connection with reality” but “short-timeframe focus”, along with some amount of “ability to take long-shot bets that probably won’t pay off”. I know of no way to make individuals or organizations resistant to the pressures of having to eat, and of enjoying leisure activities that require other humans to be paid.
You can seek out and attract members who are naturally more aligned with your mission than with their personal well-being, but that doesn’t get you to 100%. Probably enough for most purposes. You can attract sponsors and patrons who’ve overindexed on their personal wealth and are willing to share some of it with you, in pursuit of your mission.
In all cases, if the mission becomes unpopular or if those sacrificing to further it stop believing it’s worth your expense, it’ll stop. This can take some time, and isn’t terribly well-measured, so it often affects topics rather than organizations or individuals. That sucks, but over time it definitely happens.
You seem to be focused on the individual level? I was talking about learning on the level of interpersonal relationships and up. As I explain here, I believe any network of agents does Hebbian learning on the network level by default. Sorry about the confusion.
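To make that concrete, here’s a minimal sketch of what Hebbian learning at the network level could look like: treat each pairwise relationship in a group as a connection weight and strengthen it whenever the two agents are co-active, with no external feedback required. The agent names and the specific update rule are illustrative assumptions, not taken from the linked explanation.

```python
import itertools

# Illustrative assumption: a tiny group with a weight on each pairwise relationship.
agents = ["alice", "bob", "carol"]
weights = {pair: 0.1 for pair in itertools.combinations(agents, 2)}
learning_rate = 0.05

def hebbian_step(active_agents):
    """Strengthen connections between agents who were co-active on the same task."""
    for a, b in itertools.combinations(sorted(active_agents), 2):
        weights[(a, b)] += learning_rate  # no external feedback enters the update

# Repeated collaboration entrenches the alice-bob link regardless of whether
# the collaboration tracks reality -- which is why external pressure (or its
# removal) matters for what the network ends up learning.
for _ in range(10):
    hebbian_step({"alice", "bob"})

print(weights)
```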
Looking at the large scale, my impression is that the observable dysfunctions correspond pretty well with pressures (or lack thereof) organizations face, which fits the group-level-network-learning view. It seems likely that the individual failings, at least in positions where they matter most, are downstream of that. Call it the institution alignment problem if you will.
I don’t think we have a handle on how to effectively influence existing networks. Forming informal networks of reasonably aligned individuals around relatively object-level purposes seems like a good idea by default.
Hmm. I don’t think I agree that network/group learning exists distinctly from the learning and expectations of the individuals. This is not a denial that higher levels of abstraction are useful for reasoning, but usefulness doesn’t make them ontologically real or distinct from the sum of the parts.
To the extent that we can observe the lower-level components of a system, and there are few enough of them that we can identify the way they add up, we get more accurate predictions by doing so, rather than averaging them out into collective observations.
For this example, the organization “cares” about prosaic things like money because its constituents do. It may also care about it as a means of influence over other orgs or non-constituent humans, of course.
Are you ontologically real or distinct from the sum of your parts? Do you “care” about things only because your constituents do?
I’m suggesting precisely that the group-network levels may be useful in the same sense that the human level or the multicellular-organism level can be useful. Granted, there’s more transfer and overlap when the scale difference is small, but that in itself doesn’t necessarily mean that the more customary frame is equally-or-more useful for any given purpose.
Appreciate the caring-about-money point; it got me thinking about how concepts and motivations/drives translate across levels. I don’t think there’s a clean joint to carve between sophisticated agents and networks of said agents.
Side note: I don’t know of a widely shared paradigm of thought or language that would be well-suited for thinking or talking about tall towers of self-similar scale-free layers that have as much causal spillover between levels as living systems like to have.
Are you ontologically real or distinct from the sum of your parts? Do you “care” about things only because your constituents do?
Nope. Well, maybe. I’m the sum of parts in a given configuration, even as some of those parts are changed, and as the configuration evolves slightly. Not real, but very convenient to model, since my parts are too numerous and their relationships too complicated to identify individually. But I’m not any more than that sum.
I fully agree with your point that there’s no clean joint to carve between the levels of abstraction to use for modeling behavior (and especially for modeling “caring” or motivation), but I’ll continue to argue that most organizations are small enough that it’s workable to notice the individuals involved, and that you get more fidelity and understanding if you do so.