Conflicts Between Mental Subagents: Expanding Wei Dai’s Master-Slave Model
Related to: Alien Parasite Technical Guy, A Master-Slave Model of Human Preferences
In Alien Parasite Technical Guy, Phil Goetz argues that mental conflicts can be explained as a conscious mind (the “alien parasite”) trying to take over from an unsuspecting unconscious.
Last year, Wei Dai presented a model (the master-slave model) with some major points of departure from Phil’s: in particular, the conscious mind was a special-purpose subroutine, and the unconscious had a pretty good idea what it was doing[1]. But Wei said at the beginning that his model ignored akrasia.
I want to propose an expansion and slight amendment of Wei’s model so it includes akrasia and some other features of human behavior. Starting with the signaling theory implicit in Wei’s writing, I’ll move on to show why optimizing for signaling ability would produce behaviors like self-signaling and akrasia, speculate on why the same model would also promote some of the cognitive biases discussed here, and finish with even more speculative links between a wide range of conscious-unconscious conflicts.
The Signaling Theory of Consciousness
This model begins with the signaling theory of consciousness. In the signaling theory, the conscious mind is the psychological equivalent of a public relations agency. The mind-at-large (hereafter called U for “unconscious,” and similar to Wei’s “master”) has the socially unacceptable primate drives you would expect of a fitness-maximizing agent: sex, status, and survival. These are unsuitable for polite society, where only socially admirable values like true love, compassion, and honor are likely to win you friends and supporters. U could lie and claim to support the admirable values, but most people are terrible liars and society would probably notice.
So you wall off a little area of your mind (hereafter called C for “conscious” and similar to Wei’s “slave”) and convince it that it has only admirable goals. C is allowed access to the speech centers. Now if anyone asks you what you value, C answers “Only admirable things like compassion and honor, of course!” and no one detects a lie because the part of the mind that’s moving your mouth isn’t lying.
This is a useful model because it replicates three observed features of the real world: people say they have admirable goals, they honestly believe on introspection that they have admirable goals, but they tend to pursue more selfish goals. But so far, it doesn’t answer the most important question: why do people sometimes pursue their admirable goals and sometimes not?
Avoiding Perfect Hypocrites
In the simplest case, U controls all the agent’s actions and has the ability to set C’s values, and C only controls speech. This raises two problems.
First, you would be a perfect hypocrite: your words would have literally no correlation to your actions. Perfect hypocrites are not hard to notice. In a world where people are often faced with Prisoners’ Dilemmas against which the only defense is to swear a pact to mutually cooperate, being known as the sort of person who never keeps your word is dangerous. A recognized perfect hypocrite could make no friends or allies except in the very short-term, and that limitation would prove fatal or at least very inconvenient.
The second problem is: what would C think of all this? Surely after the twentieth time protesting its true eternal love and then leaving the next day without so much as a good-bye, it would start to notice it wasn’t pulling the strings. Such a realization would tarnish its status as “the honest one”; it couldn’t tell the next lover it would remain forever true without a little note of doubt creeping in. Just as your friends and enemies would soon realize you were a hypocrite, so C itself would realize it was part of a hypocrite and find the situation incompatible with its idealistic principles.
Other-signaling and Self-Signaling
You could solve the first problem by signaling to others. If your admirable principle is to save the rainforest, you can loudly and publicly donate money to the World Wildlife Fund. When you give your word, you can go ahead and keep it, as long as the consequences aren’t too burdensome. As long as you are seen to support your principles enough to establish a reputation for doing so, you can impress friends and allies and gain in social status.
The degree to which U gives permission to support your admirable principles depends on the benefit of being known to hold the admirable principle, the degree to which supporting the principle increases others’ belief that you genuinely hold the principle, and the cost of the support. For example, let’s say a man is madly in love with a certain woman, and thinks she would be impressed by the sort of socially conscious guy who believes in saving the rainforest. Whether or not he should donate $X to the World Wildlife Fund depends on how important winning the love of this woman is to him, how impressed he thinks she’d be to know he strongly believes in saving the rainforests, how easily he could convince her he supports the rainforests with versus without a WWF donation—and, of course, the value of X and how easily he can spare the money. Intuitively, if he’s really in love, she would be really impressed, and it’s only a few dollars, he would do it; but not if he’s not that into her, she doesn’t care much, and the WWF won’t accept donations under $1000.
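The man’s calculation can be caricatured as a toy cost-benefit rule. Everything here (the function name, the multiplicative form, the numbers) is my invention for illustration, not anything from Wei Dai’s model:

```python
def should_donate(value_of_goal, impressiveness, belief_gain, cost, spare_capacity):
    """Toy rule for U's signaling decision.

    All inputs are subjective utilities on an arbitrary scale;
    the names and the functional form are invented for illustration.
    """
    expected_benefit = value_of_goal * impressiveness * belief_gain
    effective_cost = cost / spare_capacity  # a dollar hurts more when money is tight
    return expected_benefit > effective_cost

# Madly in love, she'd be impressed, and the donation is cheap:
print(should_donate(10.0, 0.9, 0.8, 5.0, 2.0))    # True
# Lukewarm, she barely cares, and the WWF wants $1000:
print(should_donate(2.0, 0.1, 0.2, 1000.0, 1.0))  # False
```

The point of the sketch is just that U’s “permission” is continuous in all of these quantities at once, which is why the same person can be generous in one context and stingy in another.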
Such signaling also solves the second problem, the problem of C noticing it’s not in control—but only partly. If you only give money when you’re with a love interest and ey’s standing right there, and you only give the minimum amount humanly possible so as to not repulse your date, C will notice that also. To really satisfy C, U must support admirable principles on a more consistent basis. If a stranger comes up and gives a pitch for the World Wildlife Fund, and explains that it would really help a lot of rainforests for a very low price, U might realize that C would get a little suspicious if it didn’t donate at least a token amount. This kind of signaling is self-signaling: trying to convince part of your own mind.
This model modifies the original to include akrasia[2] (U refusing to pursue C’s goals) and the limitations on akrasia (U pursues C’s goals insofar as it has to convince other people—and C itself—that its signaling is genuine).
It also provides a key to explaining some superficially weird behavior. A few weeks ago, I saw a beggar on the sidewalk and walked to the other side of the street to avoid him. This isn’t sane goal-directed behavior: either I want beggars to have my money, or I don’t. But under this model, once the beggar asks for money, U has to give it or risk C losing some of its belief that it is compassionate and therefore being unable to convince others it is compassionate. But as long as it can avoid being forced to make the decision, it can keep both its money and C’s innocence.
Thinking about this afterward, I realized how silly it was, and now I consider myself unlikely to cross the street to avoid beggars in the future. In the language of the model, C focuses on the previously subconscious act of avoiding the beggar and realizes it contradicts its principles, and so U grudgingly has to avoid such acts to keep C’s innocence and signaling ability intact.
Notice that this cross-the-street trick only works if U can act without C being fully aware what happened or its implications. As we’ll see below, this ability of U’s has important implications for self-deception scenarios.
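In toy form, the cross-the-street trick is just U picking the highest-utility outcome once C’s innocence is counted as an asset. The numbers and names below are invented for illustration:

```python
# Toy utilities once U has spotted the beggar (all numbers invented).
GIVE   = {"money": -1, "c_innocence":  0}  # pay a dollar; C stays convinced it's compassionate
REFUSE = {"money":  0, "c_innocence": -5}  # keep the dollar; C gets suspicious
AVOID  = {"money":  0, "c_innocence":  0}  # cross the street; the decision is never forced

def utility(outcome):
    return outcome["money"] + outcome["c_innocence"]

# U's choice: avoidance dominates, keeping both the money and C's innocence.
best = max([GIVE, REFUSE, AVOID], key=utility)
print(best is AVOID)  # True
```

Once C notices the avoidance and counts it against innocence too, the AVOID entry stops scoring zero, and the trick stops working, which is what the next paragraph describes.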
From Rationality to Rationalization
So far, this model has assumed that both U and C are equally rational. But a rational C is a disadvantage for U for exactly the reasons mentioned in the last paragraph; as soon as C reasoned out that avoiding the beggar contradicted its principles, U had to expend more resources giving money to beggars or lose compassion-signaling ability. If C is smart enough to realize that its principle of saving the rainforest means you ought to bike to work instead of taking the SUV, U either has to waste resources biking to work or accept a decrease in C’s environmentalism-signaling ability. Far better that C never realizes it ought to bike to work in the first place.
So it’s to U’s advantage to cripple C. Not completely, or it loses C’s language and reasoning skills, but enough that it falls in line with U’s planning most of the time.
“How, in detail, does U cripple C?” is a restatement of one of the fundamental questions of Less Wrong and certainly too much to address in one essay, but a few suggestions might be in order:
- The difference between U and C seems to have a lot to do with two different types of reasoning. U seems to reason over neural inputs – it takes in things like sense perceptions and outputs things like actions, feelings, and hunches. This kind of reasoning is very powerful – for example, it can take as an input a person you’ve just met and immediately output a calculation of their value as a mate in the form of a feeling of lust – but it can also fail in weird ways, like outputting a desire to close a door three dozen times into the head of an obsessive-compulsive, or succumbing to things like priming. C, the linguistic one, seems to reason over propositions – it takes propositions like sentences or equations as inputs, and returns other sentences and equations as outputs. This kind of reasoning is also very powerful, and also produces weird errors like the common logical fallacies.
- When U takes an action, it relays it to C and claims it was C’s action all along. C never wonders why its body is acting outside of its control; only why it took an action it originally thought it disapproved of. This relay can be cut in some disruptions of brain function (most convulsions, for example, genuinely seem involuntary), but remains spookily intact in others (if you artificially activate parts of the brain that cause movement via transcranial magnetic stimulation, your subject will invent some plausible-sounding reason for why ey made that movement)[3].
- C’s crippling involves a tendency for propositional reasoning to automatically cede to neural reasoning and to come up with propositional justifications for its outputs, probably by assuming U is right and then doing some kind of pattern-matching to fill in blanks. For example, if you have to choose to buy one of two cars, and after taking a look at them you feel you like the green one more, C will try to come up with a propositional argument supporting the choice to buy the green one. Since both propositional and neural reasoning are a little bit correlated with common sense, C will often hit on exactly the reasoning U used (for example, if the red car has a big dent in it and won’t turn on, it’s no big secret why U’s heuristics rejected it) but in cases where U’s justification is unclear, C will end up guessing and may completely fail to understand the real reasons behind U’s choice. Training in luminosity can mitigate this problem, but not end it.
- A big gap in this model is explaining why sometimes C openly criticizes U, for example when a person who is scared of airplanes says “I know that flying is a very safe mode of transportation and accidents are vanishingly unlikely, but my stupid brain still freaks out every time I go to an airport”. This might be justifiable along the lines that allowing C to signal that it doesn’t completely control mental states is less damaging than making C look like an idiot who doesn’t understand statistics – but I don’t have a theory that can actually predict when this sort of criticism will or won’t happen.
- Another big gap is explaining how and when U directly updates on C’s information. For example, it requires conscious reasoning and language processing to understand that a man on a plane holding a device with a countdown timer and shouting political and religious slogans is a threat, but a person on that plane would experience fear, increased sympathetic activation, and other effects mediated by the unconscious mind.
This part of the model is fuzzy, but it seems safe to assume that there is some advantage to U in changing C partially, but not completely, from a rational agent to a rubber-stamp that justifies its own conclusions. C uses its propositional reasoning ability to generate arguments that support U’s vague hunches and selfish goals.
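The rubber-stamp dynamic can be sketched as a toy confabulation routine. The flaw list, function name, and phrasing are all invented; this is an illustration of the car example above, not a cognitive model:

```python
# Invented list of "common sense" flaws that both U and C can recognize.
KNOWN_FLAWS = {"has a big dent", "won't turn on"}

def c_justify(rejected_features):
    """C's post-hoc story for why U rejected an option.

    If a visible flaw matches common sense, C 'recovers' U's real reasoning;
    otherwise it confabulates. A toy sketch, not a cognitive model.
    """
    for feature in rejected_features:
        if feature in KNOWN_FLAWS:
            return f"Obviously: it {feature}"
    return "I just liked the other one more"  # pure guesswork

print(c_justify(["has a big dent"]))         # C happens to match U's real heuristic
print(c_justify(["triggered a bad hunch"]))  # C guesses, and may be completely wrong
```

The failure mode is the second branch: when U’s heuristic leaves no visible trace, C’s pattern-matching returns a guess with the same felt confidence as a correct recovery.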
How The World Would Look
We can now engage, with a little bit of cheating, in some speculation about how a world of agents following this modified master-slave model would look.
You’d claim to have socially admirable principles, and you’d honestly believe these claims. You’d pursue these principles at the limited level society expects: for example, if someone comes up to you and asks you to donate money to children in Africa, you might give them a dollar, especially if people are watching. But you would not pursue them beyond the level society expects: for example, even though you might consciously believe saving a single African child (estimated cost: $900) is more important than a plasma TV, you would be unlikely to stop buying plasma TVs so you could give this money to Africa. Most people would never notice this contradiction; if you were too clever to miss it, you’d come up with some flawed justification; if you were too rational to accept flawed justifications, you would just notice that it happens, get a bit puzzled, call it “akrasia”, and keep doing it.
You would experience borderline cases, where things might or might not be acceptable, as moral conflicts. A moral conflict would feel like a strong desire to do something, fighting against the belief that, if you did it, you would be less of the sort of person you want to be. In cases where you couldn’t live with yourself if you defected, you would cooperate; in cases where you could think up any excuse at all that allowed you to defect and still consider yourself moral, you would defect.
You would experience morality not as a consistent policy to maximize utility across both selfish and altruistic goals, but as a situation-dependent attempt to maximize feelings of morality, which could be manipulated in unexpected ways. For example, as mentioned before, going to the opposite side of the street from a beggar might be a higher-utility option than either giving the beggar money or explicitly refusing to do so. In situations where you were confident in your morality, you might decide moral signaling was an inefficient use of resources – and you might dislike people who would make you feel morally inferior and force you to expend more resources to keep yourself morally satisfied.
Your actions would be ruled by “neural reasoning” that outputs expectations different from the ones your conscious reasoning would endorse. Your actions might hinge on fears which you knew to be logically silly, and your predictions might come from a model different from the one you thought you believed. If it was necessary to protect your signaling ability, you might even be able to develop and carry out complicated plots to deceive the conscious mind.
Your choices would be determined by illogical factors that influenced neural switches and levers and you would have to guess at the root causes of your own decisions, often incorrectly – but would defend them anyway. When neural switches and levers became wildly inaccurate due to brain injury, your conscious mind would defend your new, insane beliefs with the same earnestness with which it defended your old ones.
You would be somewhat rational about neutral issues, but when your preferred beliefs were challenged you would switch to defending them, and only give in when it was absolutely impossible to keep supporting them without looking crazy and losing face.
You would look very familiar.
Footnotes
1. Wei Dai’s model gets the strongest compliment I can give: after reading it, it seemed so obvious and natural to think that way that I forgot it was anyone’s model at all and wrote the first draft of this post without even thinking of it. It has been edited to give him credit, but I’ve kept some of the terminology changes to signify that this isn’t exactly the same. The most important change is that Wei thinks actions are controlled by the conscious mind, but I side with Phil and think they’re controlled by the unconscious and relayed to the conscious. The psychological evidence for this change in the model is detailed above; some neurological reasons are mentioned in the Wegner paper below.
2. Or more accurately one type of akrasia. I disagreed with Robin Hanson and Bryan Caplan when they said a model similar to this explains all akrasia, and I stand by that disagreement. I think there are at least two other, separate causes: akrasia from hyperbolic discounting, and the very-hard-to-explain but worthy-of-more-discussion-sometime akrasia from wetware design.
3. See Wegner, “The Mind’s Best Trick: How We Experience Conscious Will” for a discussion of this and related scenarios.
That’s not a gap at all—in fact, you answered it elsewhere in your article, right here:
The key is understanding that those sense perceptions need not be present-tense/actual; they can be remembered or imagined. It’s pretty central to what I do. (Heck, most of the model you’re describing isn’t much different from things I’ve been writing about since around 2005.)
Anyway, a big part of the work I do with people is helping them learn to identify the remembered or imagined sensory predictions (which drive the feelings and behavior) and inject other ways of looking at things. More precisely, other ways of interpreting the sensory impressions, such that they lead to different predictions about what will happen.
Don’t get me wrong, there is a ton of pragmatic information you need to know in order to be able to do that effectively, so it’s really only that simple in principle.
Chief amongst the problems most people encounter is that C usually pays near-zero attention to what U is doing, and is exceptionally prone to making up propositions that “explain” U’s motives incorrectly.
(The other fairly big problem is that C is pretty bad at coming up with alternative perspectives or interpretations—we’re pretty good at doing that to other people’s ideas, but not our own.)
One thing that I sort of disagree with in your article, though, is that you can’t really view U as a reasoning “agent”. Remember: it isn’t conscious (not an agent), and it’s not singular (not an agent).
Unlike C, U can hold mutually-contradictory concepts—i.e., “double-binds”. It is also concerned only with the regulation of the expected future value of perceptually-derived measurements (which includes things like status, the state of one’s relationships, health, etc. -- really anything a human can value.)
U can and does engage in “unsupervised learning” to find ways to regulate these values, but it is limited to relatively blind variations, rather than devious plots. (But then, evolutionary searches can certainly lead to things that look like devious plots on occasion!)
However, it’s sheer anthropomorphizing to think of U as if it were an actual agent, as opposed to simply treating agency as a metaphor. To think of it as an agent tends to lead to the idea of conflict, and it also implies that if you attempt to change it, it will somehow fight back or push against being changed.
And indeed, it can sometimes appear that way, if you don’t realize that the agglomeration of regulated values we are calling “U” is not under any requirement to be self-consistent, except as a result of sensory perceptions (whether real, remembered, or imagined) occurring in close temporal bounds.
That is, you realized your issue with crossing the street was silly because you actually paid attention to it—the juxtaposition of sensory information causing U to update its model.
To put it another way, your “realizing it was silly” was how the algorithm of U updating on C feels from the inside.
Essentially (at a bit of an oversimplification) my work consists of teaching people to find what things to feel silly about, and to identify ways to think about them so that they do, in fact, feel silly about them. (It is, in fact a remarkably common response to making any sort of “U” change: the “Wait? You mean I don’t have to do it that way?” response.)
I generally agree with PJ, but not in this case. I don’t think that C exists: just U, plus sensory modalities that some parts of U can manipulate. And I think U contains systems which are not actual agents; other systems that, while not conscious, ARE actual agents in the same sense that animals with a visual cortex are; others which are agents in the sense that animals lacking a visual cortex are; and other simple agents which can manipulate the sensory modalities in a stereotyped fashion (or censor the data coming into them) but which don’t seek goals (or at least, not outward-directed goals).
How has this affected your understanding of your values?
I think I may be confused about my values partly because I’m not carving myself into pieces like this.
I see my symbolic centers as essentially a forum within which mutually beneficial, sometimes timeless, trades are negotiated between agents, some of which run on the same brain and some of which run in parallel in multiple brains. It looks more appealing, from the outside, than what I used to do, since it believes that it should. From inside, it’s nice because I’m getting all these gains from trade.
C can hold mutually-contradictory concepts. C is very good at holding mutually-contradictory concepts. Perhaps even better than U (in as much as U is more likely to give things weights while C often thinks in absolutes.)
Hmm, as I understood from the post, both C and U give due consideration in their own ways. This is too tangled for me, though; could you elaborate on how C’s absolutes are so different from U’s stimulus→reaction?
This is a very interesting article, thanks for writing it! I agree with Tim Tyler’s remark that your theory sounds more like a perturbation to a more fundamental theory of consciousness.
You may be generalizing from one example based on personal experience with feelings of tension between a conscious desire to be a utilitarian and unconscious desires that point in mostly other directions, as evidenced by your nice post The Trouble With Good. It must be remembered that very few people consciously subscribe to normative utilitarianism.
Various issues that your post does not appear to address:
•Sometimes people consciously have overtly selfish goals. Sometimes people even explicitly talk about such goals in public. (I can dig up references if you’d like.)
Relatedly, note that apparent pursuit of altruistic goals can result in social expulsion. It’s an oversimplification to say that it’s evolutionarily advantageous to have a conscious mind with noble motivations. This is quite possibly related to your remark “you might dislike people who would make you feel morally inferior and force you to expend more resources to keep yourself morally satisfied.”
•Your theory points to the idea that unusually high commitment to altruistic behavior corresponds to unusually high ratio of conscious mind to unconscious mind. But as far as I know, there’s no evidence that such people are less affected by cognitive biases in general relative to people who do not exhibit such strong commitment to altruistic behavior. Do you think that somebody like Zell Kravinsky who donated the vast majority of his 45 million dollar fortune differs from other investors who made 50+ million dollars primarily in that his conscious mind has been able to harness greater than usual control of his unconscious mind? On the flip side, are people who exhibit an abundance of traits associated with antisocial personality disorder most properly viewed as “people whose conscious minds are thwarted by their unconscious minds to a greater than usual degree?”
•The conscious vs. unconscious division is not binary but gradualist. There are aspects of one’s thinking that one is very aware of, aspects that one is somewhat aware of, aspects that one is obliquely aware of, aspects that one could be aware of if one was willing to pay attention to them, and aspects that one has no access to. This point is of course related to your mention of luminosity.
•The unconscious mind of the average human is not programmed to make him or her survive and reproduce, it’s programmed to make him or her exhibit behaviors and modes of thinking that were historically associated with surviving and reproducing. For men, survival, accumulation of status, resources and sexual partners were historically associated with reproducing, but there are some remaining cognitive features which are now unrelated to accumulation of status, resources and sexual partners.
•It must be remembered that sometimes the best way to improve reproductive fitness at the margin is to push away from the global optimum for reproductive fitness, in the direction of a closer local maximum. This is important in thinking about where humans came from, because evolution acts to optimize reproductive fitness at the margin and is not a forward-looking process. Eliezer has written at length about these things, for example in An Especially Elegant Evpsych Experiment. This is related to your remark “evolution is a crazy tinker who recycles everything.” I know that you know all of this already.
Interestingly enough, one way in which WEIRD psychological samples are different from others is that WEIRD samples are less inclined to antisocial punishment.
Thanks for the interesting reference.
There’s also a sort of akrasia which is physically based—if I eat too many refined carbs, I can get a day or two of doing very little while thinking “I don’t care, I don’t care”. It looks like a psychological problem, but is really well correlated with the carbs.
After reading Stuart Sutherland’s “Irrationality”, I’m starting to think I should discard altogether my impressions of what makes the difference between good and bad days in favour of a notebook and a real regression analysis. Is that what you’re doing? If so, what is the correlation?
The way I talked about it implied more precision than I’ve actually got.
The “I don’t care” internal monologue only seems to happen after sugar overdose, but I haven’t been keeping records.
Be careful about your temporal order here: it’s possible that when you feel a bad mood coming on, you respond by then eating more sugary food in order to try and fight the unhappiness off.
Messing up the order of this particular problem is a mistake I’ve made myself.
I agree; that’s what I meant by the “wetware akrasia” footnote linked to the post about glucose.
Same thing for me with being tired.
Oddly, I’m sort of the opposite. If I’m too well-rested, I have trouble concentrating. I seem to function best when I’m slightly sleep-deprived, but I frequently overcompensate and end up sleep-depped enough that the loss of sharpness outweighs the (flat) concentration gain.
Have you considered polyphasic sleep schedules? You won’t be getting much sleep overall, so you shouldn’t get super well-rested, but frequent sleep should keep you out of the danger zone.
I would really like to try out polyphasic sleep, but it’s not compatible with my work schedule. I would have tried it when I was in college, but classes presented the same problem. If I’m ever in a position where I’m working for myself, though, it’s definitely something I’m planning to experiment with.
For about a year I adopted a schedule in which I slept 4 hours one night and 10 hours the next. It was convenient, and I got the nice sleep-dep focus you describe every other day. I stopped because I developed emotional instability, but I’m pretty sure the sleep schedule wasn’t the sole cause of that.
This happens to me, too, any time I mess up my blood sugar, and often for several days at a time as you describe. But I’m also diabetic, so I would think that this response (particularly the magnitude and duration of it) would be atypical. How much carbohydrate does it take for this to happen? If you get this effect with less than 100g, you ought to perform a glucose tolerance test on yourself.
Since a cynical view of most charitable organizations is held by many here (including me), this bit of pushback could be a surprise resulting in Bayesian updating.
This model is missing a plausible evolutionary explanation for how U and C may have evolved. That’s a pretty gaping hole because if we don’t constrain U and C to being plausible under evolution then they can be given whatever motives, responsibilities, etc. that are convenient to fit the model to existing data (see Psychohistorian’s epicycle comment).
What do you mean? Do you mean the genetic explanation (as in “it was a mutation on the long arm of chromosome 20?”) or do you mean the evolutionary pressures that were behind it?
I tried to explain the evolutionary pressures behind it—eg a person who couldn’t signal would be unsuccessful, a person who signaled but never acted on those signals would be unconvincing. If you think those pressures wouldn’t be sufficient, why not?
I don’t have a play-by-play genetic explanation, but neither do a lot of well-established things.
yep
This feels like a worse version of epicycles, in that even if it’s kind of useful, it seems like it definitely is not what’s going on. The idea of lying being difficult seems to (A) presuppose a consciousness, and (B) make no sense—it seems like it would be much cheaper to evolve better lie-hiding mechanisms than to evolve consciousness. “Cognitive dissonance is adaptive with respect to expensive gestures” seems to explain pretty much all of what this theory is trying to address, without being weirdly centered on lying.
This feels like a theory that has been heavily massaged to fit facts, in the sense that your prediction of how such an individual would act seems to rather conveniently match how we actually act, rather than clearly and obviously predicting how we would act.
If there were an elegant alternative to this model, that would be better. But I don’t know of any other model that tries to comprehensively explain mental conflict without handwaving through the difficult parts.
Part of the problem may be that I assumed people already agreed with most of this model’s premises. Looking at the comments, I see I was totally wrong and the average person here doesn’t believe in things like the signaling theory of consciousness (which I thought was almost-universal in the Less Wrong community). So I might backtrack and try to make a sequence out of this, where I present each premise in order and slowly try to justify them. Maybe if people already believed all the premises it would look more like a reasonable way to fit a few large parts together, and less like LOOK HERE ARE TWELVE MILLION HYPOTHESIZED SYSTEMS AND SUBSYSTEMS THAT WHEN YOU PUT THEM TOGETHER KIND OF PRODUCE SOMETHING LIKE OUR BRAINS.
I’m pretty sure I agree with the theory that consciousness is the result of a social modeling arms race (i.e., signalling is one of the things driving the evolution of consciousness), but I think a sequence of posts would be good anyway. It’s good both to have a group of well-explained articles on which to found one’s thinking and to have a way to get new rationalists up to speed.
I would be curious about such a sequence.
Interesting theory.
I tend to agree with Tim Tyler that the “common” interpretation of consciousness is simpler and the signaling thing is not necessary. I realize that you are trying to increase the scope of the theory, but I am not convinced yet that the cure is better than the illness.
While I can see why an ape trying to break into “polite society” might want to gain the facility you describe, the apes created the “polite society” in the first place, and I do not see a plausible solution to that catch-22 (perhaps it’s a lack of imagination).
You raise the question of U not being able to be a complete hypocrite, therefore “inventing” C, who is good at lying to itself. But wouldn’t others notice that C is lying to itself, and that it remains largely a hypocrite? If the achievement of C is being more cooperative, why doesn’t U just skip the BS and become more cooperative? (I actually think this last point might be answerable, the key observation being that C operates on a logical, verbal level. This allows it to be predictably consistent in certain specific situations, such as “if friend does this”, which is very important in solving the kinds of game-theoretic scenarios you describe. Giving cooperation over to C, rather than “making U more cooperative”, creates consistency, which is essential. I think you might have hinted at this.)
ETA: the theory might be more palatable if the issues of consciousness and “public relation function” are separated along byrnema’s lines (but perhaps clearer).
Regarding your second two points, the idea of signalling games is that as long as C has some influence on your behavior, others can deduce from your apparent trustworthiness, altruism, etc., that you are at least somewhat trustworthy, etc. If you did away with C and simply made your U more trustworthy, you would seem less trustworthy than someone with a C, and other agents in the signalling game would assume that you have a C, but your U is unusually untrustworthy. So there’s an incentive to be partially hypocritical.
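A toy numeric sketch of that inference (the linear mixing model, the `C_WEIGHT` parameter, and all the numbers are my own illustrative assumptions, not anything from the post):

```python
# Toy sketch: observers assume everyone runs part of their behavior through a
# fully admirable C, so they discount observed behavior accordingly when
# inferring how trustworthy the underlying U is. All parameters hypothetical.

C_WEIGHT = 0.4  # share of behavior observers attribute to C

def inferred_U(observed_trust):
    """Observers' estimate of U's trustworthiness, assuming a C exists."""
    # observed = C_WEIGHT * 1.0 (C is fully trustworthy) + (1 - C_WEIGHT) * U
    return (observed_trust - C_WEIGHT * 1.0) / (1 - C_WEIGHT)

# Agent A: U trust 0.7, with a C  => observed = 0.4*1.0 + 0.6*0.7 = 0.82
# Agent B: same U trust 0.7, no C => observed = 0.70
observed_A = C_WEIGHT * 1.0 + (1 - C_WEIGHT) * 0.7
observed_B = 0.7

print(round(inferred_U(observed_A), 2))  # 0.7 (correct inference)
print(round(inferred_U(observed_B), 2))  # 0.5 (B is penalized for lacking a C)
```

Since observers can’t see whether you have a C, dropping it doesn’t make you look like an honest U; it makes you look like a worse one, which is the incentive for partial hypocrisy described above.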
This theory simply does not resonate with me. I do not feel that I am at all like that and neither has anyone I have known been like that. It is as off the mark as Freudian theories are, in my view. “So you wall off a little area of your mind..” Do you have any evidence for this idea that the consciousness is a walled off area?
Is your objection that you don’t believe in an unconscious at all, or that you think the unconscious works in a different way than I do?
I think links in this post are sufficient to show something’s going on with mental decisions that aren’t consciously available; if you haven’t already, see posts like “Belief in Belief”, “Cached Selves”, “Simultaneously Right and Wrong”, “The Apologist and the Revolutionary”, et cetera. Given that unconscious processing takes place, this is an attempt to explain why and where it might work. I guess another model might be better, but I haven’t really seen any other model (except Wei’s and Phil’s, to which this is basically a friendly amendment).
So yes, I admit it’s obviously crazy, but I don’t think it’s crazier than necessary to explain the crazy data.
My objection is not about unconsciousness, nor is it about consciousness—it is about the division of the mind/brain into two systems.
I have a mind (product or function of my brain) and that mind does the things like perception, problem solving, motivation, action, memory, emotion and so on, all the mind-type things. Some of that activity I am aware of and some not. This is not a difference primarily in the activity but in which parts of it rise to consciousness. Nothing much happens in consciousness other than conscious awareness. It is one system. Consciousness seems to be necessary for some functions of memory and focus of attention, to predict the use of skeletal muscles, to facilitate some complex thought that requires a working memory (language and the like). I do not associate my ‘I’ with my consciousness alone but with my whole, single, undivided mind. That is how it feels and a division does not resonate with me.
I know that we were all brought up to accept a Freudian or some other such division inside our heads, but I was not able to believe that in the ’50s when I was a teenager and I have never been able to do it since. I do not feel divided and never have.
I’d like to expand on this point by merging it with pjeby’s comment:
There’s nothing wrong, per se, with dividing the mind into conscious and unconscious, any more than any division into a category and its complement. The problem comes when the unconscious is made out to be highly parallel to the (usual picture of the) conscious mind, including a high degree of internal coherence, capacity for sophisticated goal-seeking, etc.
Additional problems may come from viewing the conscious mind as having a single coherent focus (“it’s the PR department”). The conscious mind comes across as too homunculized. Tim Tyler’s view makes more sense to me: PR is one function among many.
You never experience akrasia?
Yes, I have experienced akrasia. What does that say about whether I have one or two minds? All it says is that sometimes I’m conflicted.
That is an incredibly strong claim if meant literally rather than as hyperbole.
Can you clarify? Are you saying that this posting’s ideas are not similar to Freudian ideas? Or are you saying that Freudian type ideas are not generally discredited in neuroscience? Or both? Or something else?
I’m saying that Freudian type ideas are really, really bad. I’m emphasising that saying a model is as bad as Freud’s eponymous theories is a significant claim indeed, (although it is something that could reasonably be used to emphasise a lesser criticism with hyperbole.)
My comment here is orthogonal to the issue of to what extent the ideas in this post are accurate and useful but relevant to scale of evaluation.
Fair enough. If you are saying that the ideas in the post are not as bad as Freud’s and I have exaggerated, then you may be right. They seem very similar to me but I was focusing on the business of dividing up the mind into component minds and the notion of opposition between the resulting minds. I would agree that the post is not like Freud in other ways.
Well, that and I’m just taking the chance to disrespect Freud (and express contempt with the remnants of Freudian thinking that are still floating about in popular psychology.)
One thing that I would say, and you may well agree on this, is that this post could reasonably be labelled a ‘just so’ story, as could Freud’s ramblings. Fortunately the sanity watermark is somewhat higher here and contemporary understanding of human behavior at least ensures that even the most arbitrary of just so stories must explain a better set of observations.
Agreed—thanks for the correction
Re: “Do you have any evidence for this idea that the consciousness is a walled off area?”
If consciousness is a walled garden, the walls would appear to be quite mobile.
This theory seems to make a testable prediction: you will have less akrasia if your signaling requires you to reach your goal, not just show that you’re working towards it. Looking at my life, I’m not sure if that’s true.
One inspiration for this line of thinking was Michael Vassar complaining about how many people volunteer to help SIAI, versus how few people actually go through with helping them once he’s given them projects. I noticed I do this sort of thing all the time.
All these systems evolved in the ancestral environment. “Go to the moon” might not even be parseable by other processes as something you can do or not do, or something that signals anything.
People get more done if they find the activity an expression of their status-seeking or of developing mastery.
Developing mastery is itself rewarding. Babies will be interested in pushing a foot pedal in their crib until they are satisfied they can predict/control the movement of a mobile above that is operated by the pedal, and will then lose interest.
Or, to put it another way, humans value dopamine (stimulation) rewards as well as serotonin (status) rewards.
(I could be wrong, but ISTR that serotonin levels change in response to social status changes in monkeys—maybe someone can correct me if it’s a different neurotransmitter.)
Serotonin is the right one (and dopamine a bit too, as can be expected given dopamine’s general role as a ‘reward’ neurotransmitter).
Why choose to master A and not B?
Why do babies try to master a foot pedal-and-mobile combination?
For the same reason people climb mountains: because it’s there.
IOW, opportunity and interest. (One might reasonably say there are people who climb mountains for status, but it’s hard to pin a status motive on babies.)
In my observation, there are at least four major categories of reward mechanism: what I refer to as status, affiliation, safety, and stimulation, or S.A.S.S. for short. These likely correspond to major chemical pathways—maybe serotonin, oxytocin, ?, and dopamine.
Trying to make everything about human motivation into a single drive (status) makes it really hard to actually predict behavior, in my experience, since there are plenty of examples of people lowering their status to get one or more of the other kinds of reward.
I find signaling to others a useful way to meet goals. A lot of the time, I will tell other people about something that I’m working on, so I am more motivated to impress that person, and avoid looking bad if they ask me about it later and I haven’t done any work on it.
Your consciousness contains the things you need to be able to reflect on in order to function properly. That seems like a much more basic way of delineating the conscious mind than the proposed signalling theory.
Yes: consciousness sometimes excludes things that it is undesirable to signal—but surely that is more of a footnote to the theory than its main feature. Quite a bit of that work is actually done by selective forgetting—which is a feature with better targeting capabilities than the filters of consciousness.
If you want the answer to involve signalling, then the ego seems like a more suitable thing to examine.
It sounds like our two theories make different predictions.
Yours suggests (if I understand right) that the function of the conscious mind is to reflect upon things. If reflecting on things is evolutionarily necessary, then the conscious mind’s reflections ought to be used to make decisions or something. So you predict that when a person gives reasons for eir decisions, those reasons should always be correct. It also predicts that our decisions will usually be those that would be arrived at by logical reflection.
Mine predicts that the conscious mind’s reflections should be only loosely correlated with actual decisions, and more likely to be ex post facto justifications, and that even though we may have logical-sounding reasons for our decisions, on closer inspection they will look less like logic and more like the sorts of things an unconscious selfish heuristic process would arrive at.
Is that fair, or am I doing that thing where I fit the evidence to say what I want it to again?
No way! I am not saying that the brain has NO PR department. Just that consciousness is more like the business park where the PR department has its HQ—where a whole bunch of other stuff also happens.
For example, when you focus on one thing (say reading a book) you generally filter out other things (background noise). This is to concentrate resources and avoid distraction—and does not have much to do with social signalling. There are a large number of other examples, illustrating applications of filtering sensory inputs away from consciousness for purposes which have little to do with social signalling.
I mentioned “selective forgetting”. If the ego can’t reconcile actions or events with self-image, it can tell the department of records to erase the evidence before any broadcasts are made. Such material can make its way into consciousness—but after examination it gets rejected and there then follows a cover-up operation—which can sometimes look a bit like: http://en.wikipedia.org/wiki/Repressed_memory
One advantage of this is that more sophisticated methods can be used to determine what gets rejected. One disadvantage is that the forgetting process can be imperfect.
Stating your opponent’s argument in absolutist terms and then rejecting it in support of your theory is not a good method ;). Did you read the “Good and Real” description of the C function? Its role is not really to “think logically” but to be a selectively active supervisor process, IIRC. The overrides may be rare, but important.
Almost unquestionably there is a lot of rationalization and self-deception going on; this is crystal clear from the famous split-brain experiments. It’s just not obvious that this is the raison d’être of consciousness, especially in light of the lack of a plausible evolutionary path for it, as was pointed out by several people.
Re: “It’s just not obvious that this is the raison d’etre for consciousness, esp. in light of lacking a plausible evolutionary path for this”
The debating tactic you highlight would fit a public-relations function for the unconscious. And a public-relations facility for explaining one’s unconscious-driven behaviour to others could be of survival value among humans, yes?
If you look at what happens under hallucinogens—most of which act so as to make what is usually unconscious conscious—then it seems as though consciousness is a filter—which selectively eliminates the least important things. Attention is a form of selective consciousness.
Humans function less effectively if their consciousness is continually flooded with sensory inputs that are usually unconscious—since then the important and the unimportant are muddled together.
We are conscious of as little as we are partly because consciousness is a kind of meta-analysis—and is a relatively expensive feature. For systems with limited computational resources, sensory overload can be a big problem—and the filtering done by consciousness represents a large part of the solution.
What sorts of things does one “need to reflect on in order to function properly”?
As far as I can tell all non-human animals “function properly” without any reflection whatsoever.
No, no! I would say that other primates have broadly similar reflection capabilities, though smaller minds. Cetaceans too, probably. All mammals probably have some basic reflection capabilities—though it may be difficult to detect experimentally.
Some literal reflection:
“Who’s That Strange Monkey in the Mirror?”
http://www.primates.com/misc/mirror-self.html
EDIT: This theory does not sufficiently address the heart of the issue, and needs to be reconsidered.
Perhaps this knot can be cut with PCT. Suppose you have the following hypothesis:
“The executive function sequences actions so as to minimize the error signal from mental subsystems”.
This seems to explain most of the things you’re trying to resolve. For instance:
-Crossing the street to avoid the homeless man minimizes the errors from the “maximize my amount of money”, “avoid socially awkward situations”, and “maximize my self image as a charitable person” subsystems.
-Self-handicapping minimizes the errors from “maximize my self image as talented” and “maximize my appearance to others as talented” (the reason this occurs relatively infrequently is that it’s probably somewhat difficult to decouple these subsystems from other ones that may “override” them with stronger error signals)
This hypothesis has the benefit of being rooted in a conceptualization that we’re pretty sure is right (executive function and a large number of mental modules), and not relying on the extremely fuzzy conscious vs unconscious dichotomy.
Of course, there’s a big giant risk of coming up with evolutionary style “just-so” stories in the form of mental modules for every possible behavior, but that doesn’t seem insurmountable.
Are there any obvious problems with this?
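For concreteness, here is a minimal toy sketch of the hypothesis. The subsystem names and error numbers are hypothetical, chosen only to mirror the street-crossing example:

```python
# Toy sketch of "the executive function sequences actions so as to minimize
# the error signal from mental subsystems". Subsystems and numbers are
# illustrative assumptions, not claims about real psychology.

# Each subsystem reports an error for each candidate action: how far that
# action leaves the subsystem from its reference state.
subsystem_errors = {
    "keep_money":             {"give_to_beggar": 1.0, "cross_street": 0.0, "walk_past": 0.0},
    "avoid_awkwardness":      {"give_to_beggar": 0.3, "cross_street": 0.0, "walk_past": 0.8},
    "charitable_self_image":  {"give_to_beggar": 0.0, "cross_street": 0.2, "walk_past": 0.9},
}

def choose_action(errors):
    """Pick the action with the smallest total error across all subsystems."""
    actions = next(iter(errors.values())).keys()
    return min(actions, key=lambda a: sum(sub[a] for sub in errors.values()))

print(choose_action(subsystem_errors))  # -> cross_street
```

With these (made-up) weights, crossing the street dominates because it keeps the money, avoids the awkward encounter, and only mildly dents the charitable self-image.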
Your error minimizer hypothesis doesn’t say anything about why we sometimes genuinely believe that we have a certain goal and then act in a way that’s aimed at a completely different, sometimes contradictory, goal.
The U vs. C hypothesis explains that by proposing a different model: two different satisfaction maximizers (or I guess error minimizers; it comes down to the same result) with different amounts of influence over different, but somewhat overlapping, categories of action. The goals of C will not necessarily be fulfilled by U, and vice versa, explaining why our bodies sometimes do things that contradict what C, the part of ourselves with primary responsibility for self-reflection on goals and writing comments on Internet blogs, wants.
I don’t know if the OP necessarily has the best explanation for this phenomenon, but the error minimizing executive function doesn’t explain it at all, I think.
Fair point—this doesn’t address the heart of the issue. I’ll mull it over and see if the concept can’t be salvaged.
The idea of C being a public relations agency resonates for me. I prefer the C/U dichotomy to the superego/ego dichotomy because, while in both cases it is U or the ego that represents my real self, the first theory has U in agent-control and trying to mollify C, whereas the second has the superego in agent-control and embarrassed by the ego. I feel like the first theory more closely fits what I experience, especially during indecision conflicts. Without any guilt, I’ll ask: what is the minimum I need to do to feel external-world/socially comfortable here? Because my main goal is to be true to my ‘self’. Also, morality doesn’t consistently break evenly on one side or the other in these conflicts.
So I disagree with the idea that everything altruistic and good about a person goes in C, and that U is an ugly, selfish animal. I only agree that U is socially unacceptable. For one thing, empathy is a basic, natural emotion, and society teaches us to repress its natural expression as much as it teaches us to express it in fake ways. C does feel like a liar (I think this is because U doesn’t understand and isn’t convinced by propositional arguments), but C needn’t actually be lying, or representing a better or worse aspect of yourself. It’s just useful. (Here on Less Wrong, I feel like we are all ’U’s trying to double-check what C is doing and make it more correct.)
I agree with Tim Tyler, that consciousness containing the things you need to be able to reflect on in order to function properly would be more basic way of delineating the conscious mind. Combining this with the idea of C as the public relations agent, this turns into: C is in charge of everything U doesn’t do naturally on its own, in order to relate to and succeed in the outside world. If U can’t relate to people starving on other continents, C uses logical reasoning to try to care and develop an identity as someone who would care. If U wants to sit down next to the beggar on the street and help him gather resources to survive and feel good, C has the reasons why that isn’t practical or good for U in the long run. Indeed, C is our agent, especially our public relations agent.
About empathy: what is a good way for someone who experiences less empathy to relate to more normal humans?
About lying: I do not regard it as helpful to consider whether C is lying. Instead, one should ask whether there exists an isomorphism between C’s purported beliefs and an accurate model of the person’s whole mind, and if so, what that isomorphism is.
As an example from my experience, consider the exchange:
“Oh, hi, [given name]. How are you?”
“Oh, fine, thanks.”
The second person, despite not being “fine” by more objective metrics, need not be regarded as lying, so long as “Oh, fine, thanks” is simply taken to mean “I recognize that you have taken effort to express concern for me, and would like to reciprocate by showing friendliness and not bothering you with more details about myself than are appropriate for our relationship.”
Instead of asking such a nebulous, abstract question as, “Is the second person lying by claiming to be fine?” I contend that one should focus on the question of how those statements should map to a model of reality, and if there exists a concise description for how it does so.
I’m not sure what you mean by “the natural expression of empathy”, though I do agree that society encourages you to express it whether you have it or not.
By the natural expression of empathy I mean the empathy we feel at gut, involuntary level when someone being in distress makes us feel bad. For example, babies crying when other babies are upset may be the first expressions of this (though it may also reflect a selfish concern that they are in a ‘bad situation’). But certainly, seeing other people in pain can make us feel nauseous and distressed and we often want to help.
I wrote that socialization forces us to repress natural empathy as much as it requires us to fake it in some contexts. We feel empathy most naturally in physically immediate contexts (for example, towards a beggar on the street or towards someone in class being teased) and we are often socialized to repress or at least tone down some of this empathy, and less naturally in spatially removed contexts, in which we are socialized to further develop it.
Whatever is going on isn’t just about selfishness vs. empathy. There are usually social rules requiring you to not show empathy for some people and/or the results of some actions.
So, if the person discussing this, and presumably the one choosing to be rational, is C, and it must necessarily fight against a selfish, flighty, and almost completely uncaring U (except in the cases where U perceives a direct benefit), and furthermore U is assumed to have complete or nearly complete control over the person, then why be rational? The model described here makes rationality, rather than mere rationalization, literally impossible. Therefore, why try? Or did U’s just decide to force their C’s into this too, making such a model deterministic in all but name?
This seems incorrect—anyone reasonably apt at introspection would not come to the conclusion “I have only admirable goals”, but instead to the conclusion “I seem to have many conflicting goals”. It’s only a profound LACK of introspection that would make someone believe that they have only admirable goals.
Really? I honestly believe on introspection that I want to help other people and so on. I notice I’m not very good at it, but I want to.
But don’t you also believe that you want sex, status, and other socially inappropriate things? I mean, you wrote this article, you’re certainly aware of your conflicting drives.
I think that for some people it’s sometimes rational to avoid beggars. Recalling your post Doing your good deed for the day, it seems plausible that for some people, giving money to beggars is likely to lower their motivation to do other good things. Giving money to beggars is probably not a cost-effective charitable activity. So it’s plausible that some people should not give money to beggars. On the other hand, the act of refusing to help people in need can be desensitizing and can lower one’s future desire to help other people. So for some, beggar avoidance might be an important part of helping people.
Sure, to the extent that this is the case, it’s an artifact of arbitrary features of the human brain, but we don’t have freedom to alter the features of our brains at will, and for the time being we’re stuck with some of them and have to make the best of our situation. Making the best of our situation may involve apparently contradictory behavior such as beggar avoidance.
Does this sort of effect still apply to people who are consciously aware of it?
I don’t know of any experimental results addressing your question.
My own interpretation of the “doing your good deed for the day” phenomenon is that people have a fixed capacity for sacrifice—that after having made a number of decisions where they felt like they were sacrificing something, people stop being willing to sacrifice more.
So my guess would be that the effect applies to people who don’t enjoy giving to beggars (whether or not they’re consciously aware of the effect) but not to people who do enjoy giving to beggars.
There’s a good reason to avoid bums completely independent of donating money: they smell horrible and are generally unpleasant to be close to. Crossing the street to avoid the smell or risk from them seems perfectly sane goal-directed behavior to me.
Even when beggars don’t smell, it’s rational to avoid socially awkward situations, if you are bothered by them. This is the same reason I don’t answer the phone when I know it’s a telemarketer calling.
I think some terminology clarification might be in order here—consciousness performs a variety of functions (attention/monitoring, abstract thought, executive, etc.), and mediating conflicts between conscious and subconscious preferences comprises a somewhat small part of what it does. This may be why the theory seems awkward to some people (including me).