That’s why solving hierarchical agency is likely necessary for success
We’d have to talk more / I’d have to read more of what you wrote, for me to give a non-surface-level / non-priors-based answer, but on priors (based on, say, a few dozen conversations related to multiple agency) I’d expect that whatever you mean by hierarchical agency is dodging the problem. It’s just more homunculi. It could serve as a way in / as a centerpiece for other thoughts you’re having that come closer to the problem, but the hierarchicalness of the agency probably isn’t actually the relevant aspect. It’s like if someone is trying to explain how a car goes, and they start talking about how a car is made of four wheels, and each wheel applies its own force to a separate part of the road in some specific position and direction, so we can think of a wheel as having inside of it (or at least being functionally equivalent to having inside of it) another smaller car, a thing that goes, and so a car is really an assembly of 4 cars. We’re just… spinning our wheels lol.
Just a guess though. (Just as a token to show that I’m not completely ungrounded here w.r.t. multi-agency stuff in general, but not saying this addresses specifically what you’re referring to: https://tsvibt.blogspot.com/2023/09/the-cosmopolitan-leviathan-enthymeme.html)
It’s a bit annoying to me that “it’s just more homunculi” is kind of powerful for reasoning about humans, yet also evades actually understanding agentic things. I also find it tempting because it gives a cool theoretical foothold to work from, but I wonder whether the approach is hiding most of the complexity of understanding agency.
Agreed we would have to talk more. I think I mostly get the homunculi objection. Don’t have time now to write an actual response, so here are some signposts:
- part of what you call agency is explained by roughly active-inference-style reasoning
-- some types of “living” systems are characterized by having boundaries between themselves and the environment (boundaries mostly in the sense of separation of variables)
-- maintaining the boundary leads to a need to model the environment
-- modelling the environment introduces a selection pressure toward approximating Bayes (see the first toy sketch after this list)
- the other critical ingredient is boundedness
-- in this universe, negentropy isn’t free
-- this introduces a fundamental tradeoff / selection pressure for any cognitive system: length isn’t free, bitflips aren’t free, etc.
--- (downstream of that is compression everywhere, abstractions)
-- empirically, the cost/returns function for scaling cognition usually hits diminishing returns, leading to minds where it’s not effective to grow the single mind further
-- this leads to the basin of convergent evolution I call “specialize and trade” (see the second toy sketch after this list)
- empirically, for many cognitive systems, there is a general selection pressure toward modularity
-- I don’t know all the reasons for that, but one relatively simple one is “wires are not free”; if wires are not free, you get colocation of computations, like brain regions or industry hubs (see the third toy sketch after this list)
-- other possibilities are selection pressures from the CAP theorem, MVG, …
(modularity also looks a bit like box-inverted specialize and trade)
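To make the “modelling the environment pushes toward approximating Bayes” step concrete, here is a minimal sketch; the flip rate, noise rate, and the two agents are my own toy assumptions, not anything from the conversation. A system tracks a hidden binary environmental state from noisy observations: the agent that updates with Bayes’ rule is wrong about the environment far less often than one that doesn’t model it at all, and that gap is the kind of advantage selection can act on.

```python
import random

# Toy illustration (assumed numbers): an agent must keep an internal estimate
# matched to a hidden environmental state. One agent updates beliefs with
# Bayes' rule from noisy observations; the other never models the environment.

random.seed(0)
P_FLIP = 0.05      # chance the hidden state flips each step (assumed)
P_NOISE = 0.2      # chance an observation is wrong (assumed)
STEPS = 10_000

state = 0
belief = 0.5       # Bayes agent's probability that state == 1
bayes_errors = 0
naive_errors = 0   # naive agent always guesses 0

for _ in range(STEPS):
    if random.random() < P_FLIP:
        state = 1 - state
    obs = state if random.random() > P_NOISE else 1 - state

    # Predict-then-update: a minimal Bayes filter for a two-state world.
    belief = belief * (1 - P_FLIP) + (1 - belief) * P_FLIP
    like1 = (1 - P_NOISE) if obs == 1 else P_NOISE
    like0 = P_NOISE if obs == 1 else (1 - P_NOISE)
    belief = belief * like1 / (belief * like1 + (1 - belief) * like0)

    bayes_errors += int((belief > 0.5) != (state == 1))
    naive_errors += int(state == 1)

print(f"Bayes agent error rate:         {bayes_errors / STEPS:.2f}")
print(f"Non-modelling agent error rate: {naive_errors / STEPS:.2f}")
```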
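Second, a toy cost/returns model for the “specialize and trade” point. The sqrt returns curve, the superlinear maintenance cost, and the trade overhead are all my illustrative assumptions: once returns on cognition diminish and the cost of keeping one big mind coherent grows faster than linearly, two specialists that trade come out ahead of a single generalist of the same total size.

```python
# Toy cost/returns model (assumed functional forms and constants).

def performance(capacity: float) -> float:
    # Diminishing returns on cognition (assumed sqrt; any concave curve works).
    return capacity ** 0.5

def maintenance_cost(size: float) -> float:
    # Superlinear cost of keeping one large mind coherent (assumed exponent 1.5).
    return 0.005 * size ** 1.5

N = 100.0            # capacity of each specialist (assumed)
TRADE_OVERHEAD = 1.0 # assumed cost of coordinating between specialists

# One generalist of size 2N splitting its capacity across two tasks:
generalist_value = 2 * performance(N) - maintenance_cost(2 * N)

# Two specialists of size N, each handling one task, paying to trade:
specialists_value = 2 * performance(N) - 2 * maintenance_cost(N) - TRADE_OVERHEAD

print(f"generalist net value:  {generalist_value:.1f}")   # ~5.9
print(f"specialists net value: {specialists_value:.1f}")  # ~9.0
```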
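Third, a toy wiring-cost comparison for the “wires are not free” point; the positions and traffic numbers are mine and purely illustrative. If total cost is traffic times distance summed over pairs, layouts that colocate heavily-interacting computations are much cheaper, which is the pressure toward brain-region-like or industry-hub-like clusters.

```python
# Toy wiring-cost comparison (assumed traffic and 1D positions).

# Four computations forming two heavily-interacting pairs: (A, B) and (C, D).
traffic = {("A", "B"): 10, ("C", "D"): 10,
           ("A", "C"): 1, ("A", "D"): 1, ("B", "C"): 1, ("B", "D"): 1}

def wiring_cost(positions: dict) -> float:
    # Total cost = sum over pairs of (traffic between them) * (distance apart).
    return sum(t * abs(positions[x] - positions[y]) for (x, y), t in traffic.items())

colocated   = {"A": 0, "B": 1, "C": 10, "D": 11}   # heavy partners side by side
interleaved = {"A": 0, "B": 10, "C": 1, "D": 11}   # heavy partners far apart

print("colocated layout cost:  ", wiring_cost(colocated))    # 60
print("interleaved layout cost:", wiring_cost(interleaved))  # 222
```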
So, in short: I think this is where I agree with the spirit of “If humans didn’t have a fixed skull size, you wouldn’t get civilization with specialized members”, and my response is that there seems to be an extremely general selection pressure in this direction. If cells were able to just grow in size and it was efficient, you wouldn’t get multicellular organisms. If code bases were able to just grow in size and it was efficient, I wouldn’t get a myriad of packages on my laptop; it would all be just kernel. (But even if it were all just kernel, it seems modularity would kick in and you’d still get the ‘distinguishable parts’ structure.)