How confident should we be?
What should a rationalist do about confidence? Should he lean harder towards relentlessly psyching himself up to feel like he can do anything, or towards having true beliefs about his abilities in all areas, coldly predicting his likelihood of success in a given domain?
I don’t want to falsely construe these as dichotomous. The real answer will probably dissolve ‘confidence’ into smaller parts and indicate which parts go where. So which parts of ‘confidence’ correctly belong in our models of the world (which must never be corrupted) or our motivational systems (which we may cut apart and put together however helps us achieve our goals)? Note that this follows the distinction between epistemic and instrumental rationality.
Eliezer offers a decision criterion in The Sin of Underconfidence:
Does this way of thinking make me stronger, or weaker? Really truly?
It makes us stronger to know when to lose hope already, and it makes us stronger to have the mental fortitude to kick our asses into shape so we can do the impossible. Lukeprog prescribes boosting optimism “by watching inspirational movies, reading inspirational biographies, and listening to motivational speakers.” That probably makes you stronger too.
But I don’t know what to do about saying ‘I can do it’ when the odds are against me. What do you do when you probably won’t succeed, but believing that Heaven’s army is at your back would increase your chances?
My default answer has always been to maximize confidence, but I acted this way long before I discovered rationality, and I’ve probably generated confidence for bad reasons as often as I have for good reasons. I’d like to have an answer that prescribes the right action, all of the time. I want to know when confidence steers me wrong, and know when to stop increasing my confidence. I want the real answer, not the historically-generated heuristic.
I can’t help feeling like I’m missing something basic here. What do you think?
First, disambiguate the word “confidence”. There are numbers we call confidence—cold calculations that feed into other cold calculations. And then there’s an emotion we call confidence. These are not the same thing, and should not be subject to the same policies. To a first approximation, the numbers should be truthful and the emotion should be wildly over-optimistic.
So you disagree with Tordmor, and are arguing that we should never act like the Little Engine That Could by saying ‘I think I can’ to increase our credence in our likelihood of success, even if doing so helps us win more often? This is not totally implausible; I’ve heard of cases where not making Bayesian updates is the rational choice in game theoretic situations (where ignorance shields you from defection). EDIT: I just realized that my comment here makes absolutely no sense—just ignore it.
I used to play a lot of squash. It looks like this, though those players are exceptionally good at it. It took a long time to get there, but I found I could achieve a state of flow, where my game would become very good. Your use of the phrases “psyched up” and “coldly predict” reminded me of my squash game immediately.
I’m worried about the analogy I’m about to make, since a sport has well-defined rules, and I trained myself to implement a narrow strategy. We don’t tend to get that in messy real life. But here goes: squash involves strategic concerns (deciding where to place your next shot and which region of the court to occupy between shots) and, obviously, concerns of execution (managing to actually hit a shot where you intended).
It becomes very important to calibrate your strategy to your own technical skills and fitness. A weak player will tend to implement a completely different strategic game from that of a strong player. It’s almost as though a player “graduates” from one style of play to another as they improve.
When I was training, we spoke a lot about “high percentage” and “low percentage” shots. And that was appropriate; the game involves making quick decisions in the face of uncertainty. It’s easy to play a “high percentage” shot along a side wall, a shot that tends to mitigate the risk that your opponent will be able to do something fancy, but not one that really gives you any sort of upper hand in the rally. A low percentage shot might be to play a low shot across the front of the court; if hit well, this will move your opponent out of position, or maybe even win the rally, but a poor hit risks losing the rally for you, as it’s easy to play the ball out of bounds, or to lob it up where your opponent gains a huge advantage.
There are some objective decision criteria you can use to decide whether to play various types of high or low percentage shots, based mostly on your position and your opponent’s position on the court. (I had a coach who actually named zones on the court, and my friends and I would run drills where we had to play a certain type of shot depending on which zone we were in.) But there’s a huge subjective component to deciding whether to attempt a low percentage shot. After a lot of training, you just ask yourself: “does this feel right?” If ‘yes’, you make the risky move; if ‘no’, you don’t. This all undoubtedly depends on things like your balance, whether you’re placed in a comfortable position relative to the ball, and all sorts of other subtle physiological conditions.
I was a fairly inconsistent, though skilled, player 10 years ago. What tended to happen is that I could “psyche myself up” by taking a fast, hard warmup before a game. I’d play confidently, and make lots of impressive shots, but at the cost of getting too aggressive. “Playing too many low percentage shots” is actually a slight lie, but it captures the gist and fits the purpose of my narrative. Other times, I’d come on much too cold. I’d know what decision to make, but I’d reach it through deliberation; I’d be cool and non-committal. This led to some disastrous shots. I’ve hit the ball into the ceiling just because I changed my mind about where I wanted the ball to go halfway through a swing. But then there were days when I could achieve “flow”. I’d make the right decisions, almost entirely subconsciously, and never question them. I always described the feeling that came with it as “cool intensity”. There was no “fire in my belly”. Maybe a small flame, but nothing excessively passionate. Just awareness of my surroundings and what I needed to do next.
I usually fail to achieve anything like that state in the rest of my life, except in solving simple math problems on tests during my undergrad. The only thing I think these two things have in common is that I learned them using theory and then trained them a lot. The problem I’ve had in generalizing this lesson is that the world is large and complex and there’s no way to train for everything. But Eliezer’s notion of a rationality dojo resonates with my only two successful experiences. Winning in general is going to be a subtle art, and we’re going to have to practice somehow.
Shut up and multiply by the payoff, and act on the expected value, not the probability.
When you have decided what to do, do it 100%. Don’t put in 50% effort because you think there’s only a 50% chance of success. That is like the fallacy discussed here of guessing whether the red or the blue light will turn on next. If red is more frequent than blue and each event is independent, then you should guess red every time. When you are uncertain what to do, the rational thing to do is to go wholeheartedly for whichever choice seems, however uncertainly, to be the best.
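The red/blue point is easy to check numerically. Here is a minimal sketch (the 70%/30% split is a hypothetical choice of numbers) comparing “probability matching” against always guessing the more frequent color:

```python
import random

random.seed(0)

P_RED = 0.7     # hypothetical: red lights up 70% of the time
TRIALS = 10_000

lights = ["red" if random.random() < P_RED else "blue" for _ in range(TRIALS)]

# Strategy A: "probability matching" -- guess red 70% of the time, blue 30%.
matching_hits = sum(
    light == ("red" if random.random() < P_RED else "blue") for light in lights
)

# Strategy B: always guess the more frequent color.
maximizing_hits = sum(light == "red" for light in lights)

print(matching_hits / TRIALS)    # ~ 0.7*0.7 + 0.3*0.3 = 0.58
print(maximizing_hits / TRIALS)  # ~ 0.70
```

Matching your guess frequency to the light frequency wins only about 0.7·0.7 + 0.3·0.3 = 58% of the time; committing wholeheartedly to red wins about 70%.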
I talked about something similar here.
Basically, being confident causes you to do certain things. If you can just do those things, then it doesn’t matter if you’re confident. If you’re sure that you can do those things so long as you think you can, you should think you can, and then do those things.
If increasing your confidence in your chances of success above the correct value actually does raise those chances, then the correct level of confidence is the one at which your expected value is maximized, i.e. the point where the marginal benefit of a higher chance of success is exactly offset by the marginal cost of failing at tasks you wouldn’t have attempted otherwise.
Edit: But I suggest you check the effect; it might only feel like overconfidence increases chances of success.
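This can be sketched as a toy calculation. Every number here is invented for illustration: a base success rate, a hypothetical boost from felt confidence, and a failure cost that grows as overreach gets worse. The only point is that under such assumptions the optimal confidence level is interior, not maximal:

```python
# Toy model, not a claim about real psychology: all numbers are made up.
# Suppose felt confidence c (0..1) slightly raises your true success chance,
# but overconfidence also raises the stakes of failure (you attempt riskier
# tasks, and dig in deeper before giving up).

def success_prob(c):
    return 0.4 + 0.2 * c          # hypothetical boost from confidence

def failure_cost(c):
    return 30.0 + 100.0 * c ** 2  # hypothetical: overreach gets expensive

def expected_value(c, payoff=100.0):
    p = success_prob(c)
    return p * payoff - (1 - p) * failure_cost(c)

# Sweep confidence levels; the optimum lands strictly between 0 and 1.
grid = [c / 100 for c in range(101)]
best_c = max(grid, key=expected_value)
print(best_c)
```

With these particular made-up curves the expected value peaks around c ≈ 0.25: some extra confidence pays, but maxing it out does not.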
So are you saying that the correct value of confidence in my success need not meet the actual likelihood of my success? That is to say: I may deliberately believe false things?
Agreed that I should check the effect.
Yes, that is what I’m saying. We pursue epistemic rationality because it serves instrumental rationality, and thus only in so far as it really does.
That sounds like the correct goal for rationality. I guess the next step for me is to figure out good heuristics for fixing the effects of intentional adjustments, and discovering whether there are cases in real life when I should intentionally believe false things.
It could be a theoretical possibility that never pans out IRL.
Just one more thing, should it actually turn out to be preferable to believe false things:
If you start to believe in “beneficial falsehoods” your ability to check the effects of believing additional falsehoods accurately might be reduced. So keep that in mind.
If you pursue a goal where you have a very low chance of success, it is true that believing that Heaven’s armies are behind you might increase your chances a little bit. But it also has a negative side. For one, it increases the chance that your failure will be truly spectacular, as you dig yourself much deeper than you otherwise would. Even more commonly, you will waste resources: you will invest or spend things you would have started saving or protecting if you had a more realistic view of your chances.
Overconfidence is a pure gamble: small chance of jackpot, much bigger chance of losing your shirt to the house.
If you must go for a long-shot project, you should do so only if the cost of failure isn’t too high (i.e. “if it works, fantastic; if it doesn’t, oh well”), and then be as realistic as possible about your chances throughout the whole process. This can be fun, it can help you expand your horizons, and train your skills. But the “limited cost of failure” and “no overconfidence” caveats are critical.
As for objective ways to calibrate your confidence, yes, there is a lot of useful information. I’m currently writing a broad summary of our current understanding of motivation, in psychology and neuroscience. It should be posted in a few weeks, as a sequence (the thing will be bloody long, there is a lot of ground to cover), and it will touch upon this point more than once. But here, I’ll adapt a bit that deals with the concept of self-efficacy, which I think may help answer your question.
There are many possible, or even desirable goals we can go for, and our brain has to prioritize. We will be motivated to pursue some goals, and unmotivated to pursue others. This is not a simple function of desirability. The perceived difficulty of the goal matters, as does your perception of your abilities. Your mind will take both into account before deciding where to invest motivation.
For example, John Smith considers an acting career in Hollywood highly desirable; but the goal is extremely difficult to achieve, and he perceives his acting skills as average. Therefore, he may occasionally daydream about being an actor, but he won’t be motivated to actually pursue acting. Jane Smith, on the other hand, also finds that goal highly desirable; she also correctly sees the goal as extremely difficult. But she also perceives her acting skills as extremely good. So she moves to LA, hires an agent, and starts waiting tables while she waits for her break. She is highly motivated to pursue acting.
John and Jane have the same goal and the same outlook upon that goal. The difference in motivation comes entirely from their perception of their own abilities. John has low self-efficacy in regard to acting. Jane has high self-efficacy in regard to acting.
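The John/Jane contrast can be caricatured as a tiny expectancy-value calculation. All numbers here are invented; the only point is that with desirability and difficulty held fixed, motivation tracks perceived ability:

```python
# A crude expectancy-value sketch of the John/Jane example (toy numbers).

def motivation(desirability, difficulty, perceived_ability):
    # Expectancy: how likely success feels, given ability vs. difficulty.
    # Clamped to [0, 1]; the 0.5 offset is an arbitrary modeling choice.
    expectancy = max(0.0, min(1.0, perceived_ability - difficulty + 0.5))
    return desirability * expectancy

# Same goal, same perceived difficulty; only self-efficacy differs.
john = motivation(desirability=0.9, difficulty=0.9, perceived_ability=0.5)
jane = motivation(desirability=0.9, difficulty=0.9, perceived_ability=0.95)
print(john, jane)
```

John’s motivation comes out near zero; Jane’s is substantial, despite identical desirability and difficulty inputs.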
This applies to all areas. If we have high self-efficacy, we have more motivation to pursue a goal, we are more likely to keep trying after encountering obstacles and setbacks, and we are likely to be more productive while working towards that goal. We also feel generally better when we work in a field where our self-efficacy is high. Low self-efficacy, conversely, produces feelings of insecurity and depression.
But higher self-efficacy isn’t always better. Where a person with too low self-efficacy fails to achieve an attainable goal, a person with unrealistically high self-efficacy is likely to try for an unattainable one – and quite predictably fail. Incorrect calibration of self-efficacy also causes attribution problems. A person with low self-efficacy will attribute failure to personal lack of ability, even if failure occurred because of external factors. At the same time, a high self-efficacy person will attribute failure to external factors, even if it occurred due to insufficient skill or effort.
If we make a reasonable assumption (based on most current research) that the best, highest-growth goals are those on the very edge of our ability, it follows that the “optimal” self-efficacy should be tuned to just above the very limit of our ability. We should, in other words, believe that we can do just a little bit more than we actually can.
Paradoxically, while good, accurately calibrated self-efficacy helps motivation in its own narrow area, it can lead to significant problems elsewhere. We achieve expertise, and build up a good understanding of our abilities. In the process, we become used to the feeling of confidence while solving difficult, expert-level problems; feelings of incompetence and anxiety become much harder to bear. New challenges become demotivating, since they force us to encounter unfamiliar obstacles which we may or may not actually overcome. If failure occurs, it will compromise the picture of the competent, successful person we imagine ourselves to be. So we just focus on doing familiar things within our area, and we start avoiding new challenges and opportunities for growth.
This can be a major problem, especially common among those who have achieved extremely high levels of expertise in highly demanding fields. For example, it affects many people who work in academic research, regardless of the field. In our training, we spend years becoming absolute experts in the use of a few particular (and quite difficult) experimental techniques. When we then encounter a new problem, there is an inclination against looking for the best possible approach; instead, we tend to try and solve it through convoluted and indirect ways of applying techniques we are already familiar with. I have had to drag myself out of this cognitive-motivational trap more than once.
I find it is possible to do both. Just because you know there is a 95% chance of failure doesn’t mean you can’t be confident and act as if you are in the 5%. Abstract beliefs about probability don’t necessarily get built into your emotional state, and there is little good reason to deliberately build them in if the resulting state is undesirable. Mind you, self-aware confidence can be a hard skill to learn for some...
Confidence is a state of mind. It is critical from the standpoint of motivation. Without confidence we would be paralyzed into inaction; we would be unable to turn decisions into consequences. We would be constantly “scoping the game plan” and never playing. However, confidence should not play a major role in making decisions. Cold rationality is key in two aspects of the decision process: (a) how important is the decision? (b) if the decision is important, what is the “outside view”? (per Kahneman) The first question, IMO, should be handled with an algorithm analogous to triage: ascertain enough basic facts to determine what decision actually needs to be made and how long it can be deferred. In other words, part of the algorithm might be answering the question, ‘what happens if we do nothing?’ If the decision appears essentially trivial (e.g., should I buy a new chair, and if so, the red one or the blue?), you don’t need to get to (b). If a decision is important, you need to use cold rationality.
If I am dealing with a situation where the decision has already been made, I may be able to use learned skills and experience to determine how to act. Then the key question from the standpoint of confidence is whether the situation falls within the scope of my expertise, where I can be confident that my trained ‘gut reaction’ will be an appropriate response. If it is outside my area of expertise, I have no reason to be confident—although I may act like I am confident if success depends upon others trusting my abilities.
Regardless of how I may need to appear to others, I would never try to kid myself about my abilities. What may be missing in the above question—should I believe hard that I can accomplish X regardless of the likelihood of success?—are the foundational questions: Do I really have to try to accomplish X? Is there a reasonable alternative method that is more likely to be successful? Is there a reasonable alternative outcome Y that will give me the benefits I need from X with a greater chance of success? If the answers are Yes, No, No—then you have to believe in order to win, so throw the “Hail Mary” with total confidence.
I’ve been thinking about that lately, and I think a good reason for telling yourself you’ll succeed even if the odds are against you is that you should prefer to be the kind of entity that succeeds; ergo, you should optimize your predictions to benefit that kind of entity. Doing so will also maximize the measure of those entities, because of how our brains treat prediction of success as encouragement; so it’s preferable on that level as well.
Save the cold rationality for your break down of what you did wrong afterwards. When you are in the middle of doing something you should always act as if you are awesome at it. (This obviously doesn’t apply for deliberate practice sessions.)
“No. Try not. Do, or do not. There is no try.”
When acting, the focus should be on the act, and not the potential failure or success in the act. Do your hedging for failure prior to the act, but while doing the act, set that aside in your mind, and just do it. You will succeed or fail in the doing, but you’ll be more likely to succeed if you set such considerations aside, and just do.
I understand that there will be particular situations where you have to monitor for failure, but those are the exceptions that prove the rule.
I find that if I adopt this mindset I sometimes fail to fully invest myself in my work such that I am working towards my goals. For example, I may read over a text without using my full concentration to analyze it.
It reminds me of this quote:
http://lesswrong.com/lw/7i/rationality_is_systematized_winning/
“This mindset”—is that the mindset I described? If you have a more effective mindset, what is it?
Convincing myself I can succeed.