Human thought is by default compartmentalized for the same good reason warships are compartmentalized:
I’m going to ask you to recall your 2010 self now, and ask if you were actually trying to argue for a causal relationship that draws an arrow from the safety of compartmentalization to its existence. This seems wrong. It occurs to me that if you’re evolution, and you’re cobbling together a semblance of a mind, compartmentalization is just the default state, and it doesn’t even occur to you (because you’re evolution and literally mindless) to build bridges between parts of the mind.
Well, even if we agree that compartmentalized minds were the first good-enough solution, there’s a meaningful difference between “there was positive selection pressure towards tightly integrated minds, though it was insufficient to bring that about in the available time” and “there was no selection pressure towards tightly integrated minds” and “there was selection pressure towards compartmentalized minds”.
Rwallace seems to be suggesting the last of those.
Point, but I find the middle of your three options most plausible. Compartmentalization is mostly a problem in today’s complex world; I doubt it was even noticeable most of the time in the ancestral environment. False beliefs, e.g. religion, look more like social, instrumental, tribal-bonding mental gestures than like aliefs.
Yeah, I dunno. From a systems engineering/information theory perspective, my default position is “Of course it’s adaptive for the system to use all the data it has to reason with; the alternative is to discard data, and why would that be a good idea?”
But of course that depends on how reliable my system’s ability to reason is; if it has failure modes that are more easily corrected by denying it certain information than by improving its ability to reason efficiently with that data (somewhat akin to programmers putting input-tests on a subroutine rather than writing the subroutine so as to handle that kind of input), evolution may very well operate in that fashion, creating selection pressure towards compartmentalization.

Or, not.
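A minimal sketch of the subroutine analogy above (my own illustration; the function names and data are invented, not anything from the thread): the “input-test” strategy throws away data the routine can’t handle, while the “better subroutine” strategy pays the cost of actually handling it.

```python
# Toy illustration of the input-test vs. better-subroutine trade-off.
# All names and data here are invented for the sketch.

def fragile_average(xs):
    """Only works on non-empty lists of numbers; anything else is a failure mode."""
    return sum(xs) / len(xs)

def guarded_average(xs):
    """Cheap fix: test the input and discard whatever the subroutine can't handle."""
    clean = [x for x in xs if isinstance(x, (int, float))]
    return fragile_average(clean) if clean else None

def robust_average(xs):
    """Expensive fix: rewrite the subroutine to handle messier input itself."""
    total, count = 0.0, 0
    for x in xs:
        try:
            total += float(x)   # coerce things like the string "10" instead of dropping them
            count += 1
        except (TypeError, ValueError):
            continue
    return total / count if count else None

data = [1, "10", None, 4]
print(guarded_average(data))  # 2.5 -- the "10" was silently discarded
print(robust_average(data))   # 5.0 -- the same datum was actually used
```

The guard is cheaper to build and verify; the rewrite extracts more value from the same data, which is the trade being described.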
What about facts from the environment? Is it good to gloss over the applicability of something you observed in one context to another context? Compartmentalization may look like a good idea when you are spending over a decade instilling an effective belief system into children. It doesn’t look so great when you have to process data from the environment. We even see correlations where there aren’t any.

Information compartmentalization may look great if the crew of the ship would otherwise engage in pointless idle debates over the intercom. Not so much when they need to coordinate actions.
I’m not sure I’m understanding you here.

I agree that if “the crew” (that is, the various parts of my brain) are sufficiently competent, and the communications channels between them sufficiently efficient, then making all available information available to everyone is a valuable thing to do. OTOH, if parts of my brain aren’t competent enough to handle all the available information in a useful way, having those parts discard information rather than process it becomes more reasonable. And if the channels between those parts are sufficiently inefficient, the costs of making information available to everyone (especially if sizable chunks of it are ultimately discarded on receipt) might outweigh the benefits.
In other words, glossing over the applicability of something I observed in one context to another context is bad if I could have done something useful by not glossing over it, and not otherwise. Whether that was reliably the case for our evolutionary predecessors in their environment, I don’t know.
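To make the cost-benefit reading explicit (my own hedged formalization, not something stated in the thread): let d be a datum, U the utility of the decisions the receiving module makes, and C the channel and processing costs. Sharing d is worth it only when

```latex
\[
\mathbb{E}\!\left[U \mid d \text{ shared}\right] \;-\; \mathbb{E}\!\left[U \mid d \text{ withheld}\right]
\;>\; C_{\text{channel}}(d) \;+\; C_{\text{processing}}(d).
\]
```

On that reading, glossing over is adaptive exactly when the left-hand side was reliably small in the ancestral environment, which is the open question in the comment above.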
Well, one can conjecture counterproductive effects of intelligence in general and of any of its aspects in particular, and sure, there were a few, but it stands that we did evolve intelligence. Keep in mind that without a highly developed faculty of verbal ‘reasoning’ you may not be able to have the ship flooded with abstract nonsense in the first place. The stuff you feel tracks the probabilities.
Can you clarify the relationship between my comment and counterproductive effects of intelligence in general? I’m either not quite following your reasoning, or wasn’t quite clear about mine.
A general-purpose intelligence will, all things being equal, get better results with more data.
But we evolved our cognitive architecture not in the context of a general-purpose intelligence, but rather in the context of a set of cognitive modules that operated adaptively on particular sets of data to perform particular functions. Providing those modules with a superset of that data might well have gotten counterproductive results, not because intelligence is counterproductive, but because they didn’t evolve to handle that superset.
In that kind of environment, sharing all data among all cognitive modules might well have counterproductive effects… again, not because intelligence is counterproductive, but because more data can be counterproductive to an insufficiently general intelligence.
The existence of evolved ‘modules’ within the frontal cortex is not settled science and is in fact controversial. It’s indeed hard to tell how much data we share, though. Maybe without a habit of abstract thought, not so much. On the other hand, the data about human behaviours seems important.
The default state is that anything which is never linked to limb movement or any other output might as well not exist in the first place.

I think the issue with compartmentalization is that integration of beliefs is a background process that ensures a coherent response: without it, one part of the mind could come up with one action and another part with a different one, which would make you, e.g., drive a car into a tree because one part of the brain wants to turn left and another wants to turn right.

The compartmentalization of information is anything but safe. When you compartmentalize, say, your political orientation from your logical thinking, I can make you do either A or B by presenting the exact same situation in either a political or a logical way, so that one of the parts activates and arrives at either action A or action B. That is not safe. That is “it gets you eaten one day” unsafe.
And if you compartmentalize the decision-making on a warship, it will fail to coordinate the firing of its guns and will be sunk, even if it can take more holes. Consider a warship that is being attacked by several enemies. If you don’t coordinate the firing of torpedoes, you’ll direct overkill fire at some of the ships, wasting firepower, and you’ll be sunk. It’s a known issue in RTS games: you can beat a human with a pretty dumb AI if it simply coordinates fire between units better.
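A toy sketch of that focus-fire point, since it is the most concrete claim here (my illustration; the unit damage values, ship hit points and function names are all invented):

```python
# Toy sketch of coordinated vs. uncoordinated fire allocation.

def uncoordinated(units_damage, ships_hp):
    """Every unit independently shoots the first ship still afloat at the start
    of the volley; they all see the same snapshot, so they all pick the same
    target and most of the damage is overkill."""
    hp = ships_hp[:]
    target = next(i for i, h in enumerate(hp) if h > 0)
    for dmg in units_damage:
        hp[target] -= dmg
    return sum(h > 0 for h in hp)   # ships still afloat after the volley

def coordinated(units_damage, ships_hp):
    """A single planner assigns shots, moving on to the next ship once the
    current one has taken enough damage to sink."""
    hp = ships_hp[:]
    target = 0
    for dmg in units_damage:
        while target < len(hp) and hp[target] <= 0:
            target += 1
        if target == len(hp):
            break
        hp[target] -= dmg
    return sum(h > 0 for h in hp)

units = [30, 30, 30, 30]            # four units, 30 damage each
ships = [50, 50, 50]                # three enemy ships, 50 hp each
print(uncoordinated(units, ships))  # 2 ships survive: 120 damage dumped on one ship
print(coordinated(units, ships))    # 1 ship survives: fire spread to sink two ships
```

Same total firepower in both cases; the only difference is whether shots are assigned by a single planner or by each unit separately.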
The biologist in the example above is a single cherry-picked example. The majority of scientists either had the process work correctly and stopped believing that God created animals, or have failed to integrate their beliefs and are ticking time bombs with respect to producing bad hypotheses. He is an edge case between atheists and believers.
The compartmentalization of information is anything but safe.
I agree in most cases; however, there are some cases, involving ideas that are very Big and Scary and Important, where full propagation through your explicit reasoning causes you to go nuts. This has happened to multiple people on Less Wrong, whom I will not name for obvious reasons.
I would like to emphasize that I agree in most cases. Compartmentalization is bad.
I think it happens due to ideas being wrong and/or being propagated incorrectly. Basically, you would need extremely high confidence in a very big and scary idea before it can overwrite anything. The MWI is very big and scary. Provisionally, before I develop a moral system based on MWI, it is perfectly consistent to assume that it has some probability q of being wrong, and that the relative morality of actions (unknown under MWI, known under SI) does not change; consequently, no moral decision (involving a comparison of moral values) changes before there is a high-quality moral system based on MWI. A quick-hack moral system based on MWI is likely to be considerably incorrect and to lead to rash actions (e.g. quantum suicide, which actually turns out to be as bad as normal suicide after you figure stuff out).
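One way to spell out that argument (my reading, using the q defined in the comment; a sketch, not the commenter’s own formalism): if MWI is wrong with probability q, and the provisional assumption is that moral values under MWI match those under SI, then for any action a

```latex
\[
\mathbb{E}[U(a)] \;=\; q\,U_{\mathrm{SI}}(a) \;+\; (1-q)\,U_{\mathrm{MWI}}(a)
\;=\; U_{\mathrm{SI}}(a)
\quad\text{when provisionally } U_{\mathrm{MWI}} = U_{\mathrm{SI}},
\]
```

so every comparison between expected moral values of two actions reduces to the comparison under SI, and no decision changes until a genuinely different moral system based on MWI has been worked out.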
The ship is compartmentalized against a hole in the hull, not against something great happening to it. An incorrect idea held with high confidence can be the hole in the hull; the water would be the resulting nonsense overriding the system.