I generally agree, but I challenge the claim that the (mostly social) failures of conscious consequentialist reasoning are just a matter of speed-of-calculation versus a cached rule. In most social situations, one or several such rules feel particularly salient to our decision-making at any moment, but the process by which these particular rules come to seem salient is the essence of our real (unconscious) calculations.
We already have a well-developed neural framework for social situations, and a conscious calculation of utility is unlikely to outperform that framework across that domain (though it can lead to occasional insights that the framework's heuristics miss). Compare our native ‘physics engine’, which lets us track objects, move around, and even learn to catch a ball, with the much slower and more mistake-prone conscious calculations that can still give the better answer in truly novel situations (our intuitive physics is wrong about what happens to a helium balloon in the car when you slam on the brakes, but a physics student can get the right answer with conscious thought).
I suggest that attempting to live one’s entire life by either conscious expected-utility maximization or even by execution of consciously-chosen low-level heuristics is going to work out badly. What works better for human beings is to generally trust the unconscious in familiar social domains, but to observe and analyze ourselves periodically in order to identify (and hopefully patch) some deleterious biases. We should also try to rely more on conscious Bayesian reasoning in domains (like scientific controversies or national politics) that were unlikely to be optimized for in the ancestral environment.
This leaves aside, of course, the question of what to do when one’s conscious priorities seem to oppose one’s unconscious priorities (which they do, not in most things, but in some crucial matters).
The unconscious mind is not some dark corner; it’s the vast majority of “you”. It’s what and who you are, all the time you’re not pointing your “Cartesian camcorder” at yourself. It’s the huge majority of your computing capacity, nearly all of your personality, nearly all of your motivation and emotion, and beneath that the coldly calculating part that handles signaling, pack rank, etc. That’s the real problem. Cutting it out of the picture in favor of the conscious mind is like the pinky finger demanding that the body be cut off.
Upvoted because I deliberatively judge that this should scare me, and yet I immediately recognize it as obviously true, and yet it does not really scare me, because nearly all of my emotion, motivation, and so forth are below the level where my deliberative judgement desperately cries that she should be in control of things.
nearly all of my emotion, motivation, and so forth are below the level where my deliberative judgement desperately cries that she should be in control of things.
The conscious mind finds itself riding an uncontrollable wild horse of emotions, and generally the success of a person in the real world will depend on the conscious mind’s ability to strategically place carrots in places such that the horse goes roughly the right way.
But, on the other hand, moral antirealism means that if the conscious mind did ever completely free itself from that wild horse, it would have only an extremely impoverished purpose in life, because rationality massively under-constrains behaviour in this morally relative existence.
The conscious mind finds itself riding an uncontrollable wild horse of emotions, and generally the success of a person in the real world will depend on the conscious mind’s ability to strategically place carrots in places such that the horse goes roughly the right way.
This is a very common view about the human mind, and I think it is a mistaken one. In most domains of daily life, the unconscious knows what it’s doing far better than the conscious mind; and since many of our conscious goals consist of signaling and ignore the many unconscious actions that keep them running, those conscious goals would probably prove incoherent or awful for us if we genuinely pursued them in an expected-utility-maximizing fashion. Fortunately, it is impossible for us to do so by mere acts of will.
I instead hope to let my conscious thought model and understand the unconscious better, in order to point out some biases (which can be corrected for by habit or conscious effort or mind-hack) and to see if there are ways that both my conscious and unconscious minds can achieve their goals together rather than wasting energy in clashes. (So far I haven’t seen an unconscious goal that my conscious mind can’t stomach; it’s often just subgoals that call out for compromise and change.)
Also, there’s no hope of the conscious mind “freeing itself”, because it is not enough of an independent object to exist on its own.
IAWYC, but I want to add that the conscious mind has some strengths— like the ability to carefully verify logical arguments and calculate probabilities— which the unconscious mind doesn’t seem to do much of.
I’m not sure how to describe what actually happens in my mind at the times that I feel myself trying to follow my conscious priorities against some unconscious resistance, but the phenomenon seems to be as volitional as anything else I do, and so it seems reasonable to reflect on whether and when this “conscious override” is a good idea.
the question of what to do when one’s conscious priorities seem to oppose one’s unconscious priorities
Fully sorting out this problem probably requires physical capabilities and intelligence beyond our current level.
In key areas such as getting out of bed in the morning there are hacks, like eating a whole dessert spoon of honey to raise your blood sugar and make you feel active.
We already have a well-developed neural framework for social situations, and a conscious calculation of utility is unlikely to outperform that framework across that domain
It’s not about outperforming; it’s about improving on what you have. There is no competition: incoherence is indisputably wrong wherever it appears. There is a tradeoff only if the time spent reflecting on the coherence of decisions could be better spent elsewhere, but that other activity doesn’t need to be identified with “instinctive decision-making”; it might as well be hunting or sleeping.
The context here is of an aspiring rationalist trying to consciously plan and follow a complete social strategy, and rejecting their more basic intuitions about how they should act in favor of their consequentialist calculus. This sort of conscious engineering often fails spectacularly, as I can attest. (The usual exceptions are heuristics that have been tested and passed on by others, and are more likely to succeed not because of their rational appeal relative to other suggestions but rather because of their optimization by selection.)
Then they are overreaching, using the tool incorrectly and confusing themselves instead of fixing the problems. Note that conscious planning is also mostly intuition, not expected utility maximization, and you’ve just highlighted the incoherence of applying it where the consequence of doing so is failure, while the goal is success.
Excellent comment.