I think, ideally, for anything that really matters, I’d selfishly prefer to just be in consensus with flawless reasoners, by sharing the same key observations, and correctly deriving the same important conclusions?
But you’re embedded in physics, and you can rely on the fact that you will never be a flawless reasoner. You’re made of neurons, none of which is a flawless reasoner, but they manage to work together as a single agent by keeping each other informed about what your opinion is as that opinion gets refined. Your neurons operate at or near criticality, so any neuron could potentially cause an update that propagates through the whole brain. Neurons’ uncertainty about whether other neurons will provide an insightful contribution, combined with a consensus network that refines away errors in ways that diffuse towards your self, is what allows free will to fall out of a deterministic system: your neurons inform each other of your personality, and you move your environment towards yourself.
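To make the criticality claim concrete, here’s a toy branching-process sketch. This is my own illustration, not a model of actual neurons; the branching ratios, the Poisson recruitment rule, and the 10,000-unit cutoff are all arbitrary assumptions. The point it shows: right around a branching ratio of 1, most single firings fizzle, but a non-trivial fraction propagate through the whole system, which is the sense in which “any neuron could cause a whole-brain update.”

```python
# Toy branching process: one unit fires spontaneously, and each firing unit
# recruits ~Poisson(m) downstream units. Illustration only, not a model of
# real neurons; m values, trial count, and cutoff are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(m: float, cutoff: int = 10_000) -> int:
    """Count how many units a single spontaneous firing ends up recruiting."""
    active, total = 1, 1
    while active and total < cutoff:
        # each currently-active unit recruits a Poisson(m) number of others
        active = rng.poisson(m, size=active).sum()
        total += active
    return int(total)

for m in (0.8, 1.0, 1.2):  # subcritical, critical, supercritical
    sizes = [avalanche_size(m) for _ in range(2000)]
    spanning = sum(s >= 10_000 for s in sizes)
    print(f"m={m}: median avalanche size {int(np.median(sizes))}, "
          f"{spanning}/2000 avalanches reach the system-spanning cutoff")
```

At m=0.8 essentially everything dies out locally; at m=1.2 a large fraction of firings take over the whole system; only near m=1 do you get the heavy-tailed mix where small events dominate but system-spanning ones stay possible.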
In a social network, overly dense connectivity can break edge-of-chaos, criticality-seeking behavior by producing a network that accepts updates from people with too little processing. This is especially severe when there’s any sort of hierarchy, particularly one tied to control or dominance.
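Here’s the same toy rewired for the social case, again purely as a sketch under assumptions I’m making up (k listeners per person, a fixed acceptance probability for the “too little processing” case, and scrutiny scaling as p = 1/k for a criticality-seeking network): if acceptance probability stays fixed while connections are added, the effective branching ratio k·p passes 1 and every noisy update cascades globally, whereas scaling scrutiny with density keeps the network near criticality.

```python
# Same toy, social-network version: one person posts an update; each active
# sharer exposes k people, each of whom accepts (and re-shares) independently
# with probability accept_prob. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def cascade_size(k: int, accept_prob: float, cutoff: int = 10_000) -> int:
    """Count how many people one update reaches before it dies out (or hits the cutoff)."""
    active, total = 1, 1
    while active and total < cutoff:
        active = rng.binomial(k, accept_prob, size=active).sum()
        total += active
    return int(total)

fixed_p = 0.2                         # acceptance with too little processing
for k in (3, 5, 10, 20):              # increasing connection density
    naive = np.median([cascade_size(k, fixed_p) for _ in range(500)])
    vetted = np.median([cascade_size(k, 1.0 / k) for _ in range(500)])
    print(f"k={k}: fixed p=0.2 -> median cascade {int(naive)}; "
          f"scrutiny scaling as p=1/k -> median cascade {int(vetted)}")
```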
I propose that, when there are conflicts about approaches to reasoning, information flow should continue, and if things go well, any partition should leave the networks partially separated but still overlapping.
(I do not intend to be at all metaphorical. I am intending to make claims that these patterns are literally the same, not mere metaphor. If they are not literally the same, my claim is wrong, and discovering it will teach me new things.)
I can link some lectures I’ve watched recently about this. E.g., I liked this one on “what is complexity”, which goes over how complex systems science is about working out what laws can be stated universally about large systems that are neither simple due to high entropy nor simple due to low entropy. It’s not relevant enough to be worth watching if none of that is new to you, but if it is new, it may be important background knowledge.
Also, keep in mind that there’s a good chance I’m straight up just not as smart or educated as most people on here; I compensate for that the same sort of way current models do: I’ve seen a lot more stuff shallowly than most people study deeply. (But a real PhD would actually be good at the stuff I merely fangirl about.)
On reread, it seems like I may have missed components of your reply in my reply. I’m about to sleep; if you reply with emphasis on which parts I missed, I’ll reply tomorrow.