Simulate and Defer To More Rational Selves
I sometimes let imaginary versions of myself make decisions for me.
I first started doing this after a friend told me (something along the lines of) this story. When they first became executive director of their organization, they suddenly had many more decisions to deal with per day than ever before. “Should we hire this person?” “Should I go buy more coffee for the coffee machine, or wait for someone else to deal with it?” “How many participants should attend our first event?” “When can I schedule time to plan the fund drive?”
I’m making up these examples myself, but I’m sure you, too, can imagine how leading a brand new organization might involve a constant assault on the parts of your brain responsible for making decisions. They found it exhausting, and by the time they got home at the end of the day, a question like, “Would you rather we have peas or green beans with dinner?” often felt like the last straw. “I don’t care about the stupid vegetables, just give me food and don’t make me decide any more things!”
They were rescued by the following technique. When faced with a decision, they’d imagine “the Executive Director of the organization”, and ask themselves, “What would ‘the Executive Director of the organization’ do?” Instead of making a decision, they’d make a prediction about the actions of that other person. Then, they’d just do whatever that person would do!
In my friend’s case, they were trying to reduce decision fatigue. When I started trying it out myself, I was after a cure for something slightly different.
Imagine you’re about to go bungee jumping off a high cliff. You know it’s perfectly safe, and all you have to do is take a step forward, just like you’ve done every single time you’ve ever walked. But something is stopping you. The decision to step off the ledge is entirely yours, and you know you want to do it because this is why you’re here. Yet here you are, still standing on the ledge.
You’re scared. There’s a battle happening in your brain. Part of you is going, “Just jump, it’s easy, just do it!”, while another part—the part in charge of your legs, apparently—is going, “NOPE. Nope nope nope nope NOPE.” And you have this strange thought: “I wish someone would just push me so I don’t have to decide.”
Maybe you’ve been bungee jumping, and this is not at all how you responded to it. But I hope (for the sake of communication) that you’ve experienced this sensation in other contexts. Maybe when you wanted to tell someone that you loved them, but the phrase hovered just behind your lips, and you couldn’t get it out. You almost wished it would tumble out of your mouth accidentally. “Just say it,” you thought to yourself, and remained silent. For some reason, you were terrified of the decision, and inaction, unlike action, didn’t feel like deciding at all.
When I heard this story from my friend, I had social anxiety. I didn’t have way more decisions than I knew how to handle, but I did find certain decisions terrifying, and was often paralyzed by them. For example, this always happened if someone I liked, respected, and wanted to interact with more asked to meet with me. It was pretty obvious to me that it was a good idea to say yes, but I’d agonize over the email endlessly instead of simply typing “yes” and hitting “send”.
So here’s what it looked like when I applied the technique. I’d be invited to a party. I’d feel paralyzing fear, and a sense of impending doom as I noticed that I likely believed going to the party was the right decision. Then, as soon as I felt that doom, I’d take a mental step backward and not try to force myself to decide. Instead, I’d imagine a version of myself who wasn’t scared, and I’d predict what she’d do. If the party really wasn’t a great idea, either because she didn’t consider it worth my time or because she didn’t actually anticipate me having any fun, she’d decide not to go. Otherwise, she’d decide to go. I would not decide. I’d just run my simulation of her, and see what she had to say. It was easy for her to think clearly about the decision, because she wasn’t scared. And then I’d just defer to her.
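Spelled out as an explicit procedure, the trick looks something like the following sketch. To be clear, this is a toy illustration—the function names, options, and scores are all made up for the example, not a literal spec of what happens in anyone’s head:

```python
# Toy sketch of "simulate and defer". Everything here (the options,
# scores, and "fear penalty") is an illustrative assumption.

def evaluate(option, obstacle_weight):
    """Score an option; obstacle_weight scales how much the current
    obstacle (here, fear) distorts the assessment."""
    base = {"go to party": 2.0, "stay home": 1.0}[option]
    fear_penalty = {"go to party": 5.0, "stay home": 0.0}[option]
    return base - obstacle_weight * fear_penalty

def simulated_rational_self(option):
    """A version of me identical in every way, except the obstacle
    contributes nothing to how she scores options."""
    return evaluate(option, obstacle_weight=0.0)

def decide(options, obstacle_detected):
    if obstacle_detected:
        # Step back: don't force a decision while impaired. Predict
        # what the unimpaired self would choose, and defer to that.
        return max(options, key=simulated_rational_self)
    return max(options, key=lambda o: evaluate(o, obstacle_weight=1.0))

print(decide(["go to party", "stay home"], obstacle_detected=True))
# -> "go to party": the fearless simulation decides; actual-me defers.
```

(With the obstacle at full weight, the same procedure picks “stay home”—which is exactly the failure mode that stepping back routes around.)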
Recently, I’ve noticed that there are all sorts of circumstances under which it helps to predict the decisions of a version of myself who doesn’t have my current obstacle to rational decision making. Whenever I’m having a hard time thinking clearly about something because I’m angry, or tired, or scared, I can call upon imaginary Rational Brienne to see if she can do any better.
Example: I get depressed when I don’t get enough sunlight. I was working inside where it was dark, and Eliezer noticed that I’d seemed depressed lately. So he told me he thought I should work outside instead. I was indeed a bit down and irritable, so my immediate response was to feel angry—that I’d been interrupted, that he was nagging me about getting sunlight again, and that I have this sunlight problem in the first place.
I started to argue with him, but then I stopped. I stopped because I’d noticed something. In addition to anger, I felt something like confusion. More complicated and specific than confusion, though. It’s the feeling I get when I’m playing through familiar motions that have tended to lead to disutility. Like when you’re watching a horror movie and the main character says, “Let’s split up!” and you feel like, “Ugh, not this again. Listen, you’re in a horror movie. If you split up, you will die. It happens every time.” A familiar twinge of something being not quite right.
But even though I noticed the feeling, I couldn’t get a handle on it. Recognizing that I really should make the decision to go outside instead of arguing—it was just too much for me. I was angry, and that severely impedes my introspective vision. And I knew that. I knew that familiar not-quite-right feeling meant something was preventing me from applying some of my rationality skills.
So, as I’d previously decided to do in situations like this, I called upon my simulation of non-angry Brienne.
She immediately got up and went outside.
To her, it was extremely obviously the right thing to do. So I just deferred to her (which I’d also previously decided to do in situations like this, and I knew it would only work in the future if I did it now too, ain’t timeless decision theory great). I stopped arguing, got up, and went outside.
I was still pissed, mind you. I even felt myself rationalizing that I was doing it because going outside despite Eliezer being wrong wrong wrong is easier than arguing with him, and arguing with him isn’t worth the effort. And then I told him as much over chat. (But not the “rationalizing” part; I wasn’t fully conscious of that yet.)
But I went outside, right away, instead of wasting a bunch of time and effort first. My internal state was still in disarray, but I took the correct external actions.
This has happened a few times now. I’m still getting the hang of it, but it’s working.
Imaginary Rational Brienne isn’t magic. Her only available skills are the ones I have in fact picked up, so anything I’ve not learned, she can’t implement. She still makes mistakes.
Her special strength is constancy.
In real life, all kinds of things limit my access to my own skills. In fact, the times when I most need a skill will very likely be the times when I find it hardest to access. For example, it’s more important to consider the opposite when I’m really invested in believing something than when I’m not invested at all, but it’s much harder to actually carry out the mental motion of “considering the opposite” when all the cognitive momentum is moving toward arguing single-mindedly for my favored belief.
The advantage of Rational Brienne (or, really, the Rational Briennes, because so far I’ve always ended up simulating a version of myself that’s exactly the same except lacking whatever particular obstacle is relevant at the time) is that her access doesn’t vary by situation. She can always use all of my tools all of the time.
I’ve been trying to figure out this constancy thing for quite a while. What do I do when I call upon my art as a rationalist, and just get a 404 Not Found? Turns out, “trying harder” doesn’t do the trick. “No, really, I don’t care that I’m scared, I’m going to think clearly about this. Here I go. I mean it this time.” It seldom works.
I hope that it will one day. I would rather not have to rely on tricks like this. I hope I’ll eventually just be able to go straight from noticing dissonance to re-orienting my whole mind so it’s in line with the truth and with whatever I need to reach my goals. Or, you know, not experiencing the dissonance in the first place because I’m already doing everything right.
In the meantime, this trick seems pretty powerful.
I have found that the more I use my simulation of HPMOR!Quirrell for advice, the harder it is to shut him up. As with any mental discipline, thinking in particular modes wears thought-grooves into your brain’s hardware, and before you know it you’ve performed an irreversible self-modification. Consequently, I would definitely recommend that anybody attempting to supplant their own personality (for lack of a better phrasing) with a model of some idealized reasoner try to make sure that the idealized reasoner shares your values as thoroughly as possible.
I’ve now got this horrifying idea that this has been Quirrell’s plan all along: to escape from HPMOR to the real world by tempting you to simulate him until he takes over your mind.
Hmm, so the Fanfiction.net website is his horcrux?
In retrospect, I’m kind of glad that my plan to make a Quirrell-tulpa never got off the ground.
afaict the quirrell tulpa is one of the more common types of tulpas. if you have one, do not use it. it is secretly voldemort and will destroy your soul.
But Quirrell didn’t cause Eliezer to write HPMOR...
It’s to Quirrell’s advantage that you believe that, of course.
Beware acausal trade! Once Eliezer imagined Quirrell, he had to write HPMOR to stop Quirrell from counterfactually simulating 3^^^3 dustspeckings.
Rational agents cannot be successfully blackmailed by other agents that simulate them accurately, and especially not by figments of their own imagination.
Are you implying that rational agents can be successfully blackmailed by other agents that simulate them inaccurately? (This does seem plausible to me, and is an interesting rare example of accurate knowledge posing a hazard.)
Well, that’s quite obvious. Just imagine the blackmailer is a really stupid human with a big gun who’d go about blackmail in a variety of awful ways, who has a bad case of typical mind fallacy, and who, if anything goes against their expectations, gets angry and just shoots the victim before thinking through the consequences.
It’s kinda obvious, but deeply counter-intuitive—I mean, it’s a situation where stupidity is a decisive advantage!
Not quite stupidity—irrationality. And it is well known that (credible) irrationality can be a big advantage in negotiations and other game-theoretic scenarios. Essentially, if I’m irrational then you cannot simulate me accurately and cannot predict what I will do, which means that your risk aversion pushes you towards safe choices that limit your downside at the cost of your upside. And if it’s a zero-sum game, I capture that upside.
Of course, I need to be credible in showing my irrationality.
The reason such a strategy is not used more often is that (a) there is often the option to walk away, which many people take when faced with an irrational counterparty; and (b) when two irrational counterparties meet, bad things happen :-)
There are instances where (arguably) irrationality confers a big game-theoretic advantage even though you’re predictable.
For instance, suppose you’re leading a nuclear superpower. If you can make it credibly clear that you really truly would be happy to launch World War Three if the other guys don’t back down, then they probably will. Not because they can’t predict your actions, but because they can.
In this sort of case it’s either debatable whether it’s really irrationality, or debatable whether it’s really a game-theoretic advantage. If you can really be sure that the other guys will back down, then maybe it’s not irrationality because you never have to blow up the world. If you can’t, then maybe you don’t have a game-theoretic advantage after all because if you play this game often enough then the other guys call your bluff, you push the big red button, and everyone dies.
[EDITED to add: I think this sort of case is nearer to the example discussed upthread than the sort where unpredictability is key.]
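To make the commitment logic concrete, here is a minimal payoff-matrix sketch of chicken (the payoff numbers are illustrative assumptions, chosen only to show the structure): being predictably committed to “dare” flips the opponent’s best response to “swerve”.

```python
# Minimal sketch of the game of chicken with made-up payoffs.
# Entry (mine, theirs) gives payoffs for (my move, their move).

PAYOFFS = {
    ("swerve", "swerve"): (0, 0),
    ("swerve", "dare"):   (-1, 1),
    ("dare",   "swerve"): (1, -1),
    ("dare",   "dare"):   (-100, -100),  # mutual disaster
}
MOVES = ("swerve", "dare")

def my_best_response(their_move):
    """My payoff-maximizing move, given a known opponent move."""
    return max(MOVES, key=lambda mine: PAYOFFS[(mine, their_move)][0])

print(my_best_response("dare"))    # -> "swerve": I back down if they
                                   #    credibly commit to daring.
print(my_best_response("swerve"))  # -> "dare": I exploit a known softy.
```

The catch is just what’s described above: the commitment only works if it’s credible, and making it credible means genuinely being willing to take the disastrous payoff.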
That’s more like sheer bloodymindedness X-) not irrationality.
Yeah, it’s called the game of chicken and that’s a slightly different thing.
I think you mean that rational agents cannot be successfully blackmailed by other agents for which it is common knowledge that they can simulate the victim accurately and will only use blackmail if they predict it to be successful. All of this, of course, in the absence of mitigating circumstances (including, for example, the theoretical likelihood of other agents that reward you for counterfactually giving in to blackmail under these circumstances).
That doesn’t seem true. How can the victim know for sure that the blackmailer is simulating them accurately or being rational?
Suppose you get mugged in an alley by random thugs. Which of these outcomes seems most likely:
You give them the money, they leave.
You lecture them about counterfactual reasoning, they leave.
You lecture them about counterfactual reasoning, they stab you.
Any agent capable of appearing irrational to a rational agent can blackmail that rational agent. This decreases the probability of agents which appear irrational being irrational, but not necessarily to the point that you can dismiss them.
Why not? Are rational agents generally immune to blackmail, or is it not strictly advantageous to be able to simulate another agent accurately?
I think it basically comes down to this: if the rational agent recognizes that the rational thing to do is NOT to buckle under blackmail, regardless of what the rational agent simulating them threatens, then the blackmailer’s simulation of the blackmailee will also not respond to that pressure, and so it’s pointless to go to the effort of pressuring them in the first place. However, if the blackmailer is irrational, their simulation of the blackmailee will be irrational, and thus they will carry through with the threat. This means that the blackmailee’s simulation of the blackmailer as rational is itself inaccurate, as the simulation does not correspond to reality. If the blackmailee is irrational, their simulation of the blackmailer will be irrational, and thus they will concede to the demands. Yet each party acts as if their simulation of the other were correct, until actual, photon-transmitted information about the world can impress itself into their cognitive function. So no one gets what they want. The best choice for a rational agent here is just to ignore the good professor. On the other hand, you can’t argue with results. And there’s a simulation of Quirrell s-quirreled away in your brain, whispering.
It looks like you are saying that both rational and irrational agents model competitors as behaving in the same way they do.
Is that why you think that an irrational simulation of a rational agent must be wrong, and why a rational simulation of an irrational agent must be wrong? I suggest that an irrational agent can correctly model even a perfectly rational one.
sorry
Worryingly, this sounds like a good deal—getting skills for faster power/control increase, keeping continuity of consciousness, and increasing the odds of escaping from this reality into the next higher one...
Possibly valuable to talk with Robin Hanson and me for revision to HPMOR!Quirrell decision procedures from the source?
I would give a finger from my wand hand for such an opportunity.
I bid two.
This whole comment thread is utterly delightful.
What I want to know is: why didn’t Anna tell me about her technique for managing decision fatigue earlier?
Please do report back on whether it helps you!
Sounds somewhat like exploiting the Fundamental Attribution Error. In our minds, other people (including imagined selves) are not much influenced by external situational factors; thus, they act more consistently with their internal characteristics. ActualMe, meanwhile, is always thrown around by external forces and emotional chaos.
I was trying to think of what a more rational response to this would be, since I agreed with your points and have also used a very similar trick. I then came up with: ‘The rational thing to do is to say you agree, upvote, and then get back to the other tasks you have, rather than spending an hour worrying about a perfect response—which sounds a lot like the very social anxiety she was trying to avoid.’
I agree with your post. Upvoted.
Mine went: “See if someone has already stated what you want, then upvote them and the OP, and be done with it.”
I use a variant of this from my tabletop gaming days, ever since I noticed that player characters in rpgs don’t really suffer from decision fatigue or hyperbolic discounting in the same way. I simulate myself as a player making choices about the Toggle-character in a game. “If the GM asked me what my character was going to do today/for the next hour/in response to this challenge, what would I say?”
I was having a hard time making an accurate model from my friends or an idealized self, but this method really helped me with the concept.
Though, there’s a failure mode I noticed once I started doing it more regularly. When I think about how a PC would do something, they often place very little value on issues like ‘could this action result in me getting into legal or social trouble.’
So, I just had a weird turn at work, that’s made it obvious that I can’t stay here.
And when I ask myself, “what does Protagonist Brent do?”, I immediately imagine powering through my flu, putting my most valuable possessions in my car, pointing West, and driving until I reach Berkeley—then finding an apartment and walking into start-ups and big companies and saying “I can code. I just moved here from Idaho. I need a job. What have you got?”
And then I don’t do that, because I’m too dizzy to get out of bed, let alone drive 10 hours to Berkeley, and I have no idea where I’d stay, and I only have $3,000 to my name.
Because my imagination does NOT conserve detail, it just builds a narrative.
How do you work around that?
Update:
I’ve slept, rested, stuffed myself full of multivitamins, and got through my flu. My most necessary possessions are in my car. I am pointed West, with a room waiting for me in Berkeley.
*puts on Blues Brothers glasses*
Hit it.
Drive safely, and live well! We’re behind you.
*salutes* I profoundly appreciate that. So far, there have been zero police chases inside shopping malls, or their metaphorical equivalents.
Content appropriate to the thread:
Invoking what brave, confident Brent would do has been working SWIMMINGLY WELL for me. Absurdly well. Impossibly well. I have literally spent my entire life not understanding the underlying principle behind “fake it till you make it”, but now I get it instinctively.
Thank you all.
Hello Past Brent, this is Future Brent, aka the actor playing Protagonist Brent on the popular hit show, “Ialdabaoth”.
Here’s what you’re missing:
“Montage”.
It looks like Protagonist Brent has to power through recuperation, driving, interviews, hiring, etc. in a matter of weeks because you forget that Protagonist Brent’s super-long slogs get edited down into a montage. Six months of work still takes six months, but Protagonist You gets to construct that into a montage-like narrative where the boring parts take up maybe two sentences each, and the cool parts take up minutes to hours of excitedly-narrated epicness.
But I, the actor playing Protagonist Brent, still have to slog through the full six months of work, so that we can pick the best highlights and edit it down in post-production to a few pithy, iconic representations of “this was hard work and there was lots of improvement and moments of triumph”. The payoff of the slog is the moments of triumph and the distilled moments of “I can sweat for this”, and neglecting them means a fake montage, which means Protagonist Brent doesn’t look very epic.
And that itself can be motivating! When things are a slow slog, and you can’t just ‘flow’ it, but are actively obsessing over the future in a way that prevents you from connecting to the present, stop saying “I can’t wait to stop having to do this” and start saying “man, I can’t wait to see what the highlight reel for this is going to look like.” Don’t imagine the you that’s STOPPED working the slog, imagine the you that’s FINISHED working the slog. It’s a subtle but profound difference.
Taskify your challenges. To continue the metaphor: Protagonists often have lots of adventures/problems/riddles to solve on their way to the end of the book.
You asked Protagonist Brent what he would do and he told you how he would get a job. That’s a good start, but don’t let him take all the credit while foisting the legwork off onto you! How does Protagonist Brent find somewhere to live? How does he address his financial concerns?
I might also add there’s a lot of scope for dramatic imagery if Protagonist Brent rests up for a day or two and then rises from his bed as if from the grave. :)
“WWRMD?” (RM for “rational me”.)
This is exactly the point of asking “What Would Jesus Do?” Christians are asking themselves what a perfectly moral, all-knowing person would do in this situation, and using the machinery their brains have for simulating a person to find out the answer, instead of using the general-purpose reasoner that is so easily overworked. Of course, simulating a person (especially a god) accurately can be kind of tricky. Religious people use similar thoughts to get themselves to do things that they want in the abstract but that are hard in the moment: What would I do if I were the kind of person I want to become? What would a perfectly moral, all-knowing person think about what I’m about to do?
Right. Unfortunately, they know they are not as good as Jesus, so this fails more often than not. However, simulating oneself with just one small difference, the way the OP suggests, is probably much easier and so is likely to be more successful.
Actually, simulating a hypothetical wise, but not necessarily superhuman, counselor can be better than simulating myself. I have my confusions and my weaknesses, and it can be hard to generate a model of someone who is exactly like myself but without them.
On the other hand, since my hypothetical personal hero is fictional (although maybe based on someone real or fictional that I admire), it is easier to generate a model of this wise, calm and collected adviser.
I think the real trick here is that any “simulated person” in your mind has cognitive “permission” to take the Outside View on you. So it will damn well tell you, “You’re stressed, you’re tired, you’re not thinking correctly” instead of just endorsing everything you think.
The CEO of the company I work for is named Jeff. In a 20-year tenure speech before a packed auditorium, a coworker recently recommended that in the course of our workday we should all be asking WWJD: “What would Jeff do?”
It was pretty funny.
Guess you had to be there.
I’ve used WWJBD. I don’t remember how it stuck with me, and I haven’t killed anyone yet.
For those who are as puzzled by that as I was: I think JB = Jack Bauer, hero of an American TV series called 24. Bauer is a counterterrorism agent; AAUI he’s portrayed as smart, heroic, omnicompetent, violent, principled but not particularly scrupulous, and notoriously willing to go as far as torture to save lives.
(I haven’t watched 24 and would be glad of correction if my characterization is wrong. Or for that matter if I’ve got the wrong JB.)
James Bond. Jack Bauer could work too.
Not great for moral guidance, but sometimes seems to help with hesitation.
Aha. Of course, unlike Jack Bauer, who is portrayed as smart, heroic, omnicompetent, violent, and principled but not particularly scrupulous, James Bond is portrayed as smart, heroic, omnicompetent, violent, and principled but not particularly scrupulous.
I wish this were posted in main.
Wish granted!
I’ve found something like this useful, especially at work, but hard to calibrate. “What would a less shy kalium do? Tell the CTO that he’s wrong, because he’s wrong.” Sometimes this is a good idea, but sometimes it’s not. “What would an optimally shy kalium do?” is not so easy to predict.
Perhaps your simulated assistant is optimized for the wrong thing, and you actually want Kalium Who Acts With Regard to the Greater Good of the Project or similar. “Don’t be shy” is orthogonal to “someone in charge is making it difficult to get stuff done”.
This seems to be an extremely powerful method for handling decision fatigue—it’s one of the few (maybe the only?) things I’ve seen on Less Wrong that I’m going to start applying immediately because of the potential I see in it. On the other hand, I doubt it would be so effective for me for handling social anxiety or other emotion-laden situations. A voice in my head telling me to do something that I already know I should do won’t make the emotion go away, and, for me, the obstacle in these sorts of situations is definitely the emotion.
A voice in your head isn’t a simulation of what the idealized person would do. What you want your simulation to be is the experience of observing that idealized person actually doing it. Otherwise, you are just thinking (System 2) instead of simulating (System 1).
To put it another way: a voice in your head is Far, a simulated experience is Near—and Near has far more influence over your emotions (no pun intended).
Exactly my experience—it helps with making little decisions throughout the day and staying productive, but when it comes to ones I’m reluctant to make… no matter how many times the little people in my head go ‘this one!’ the issue isn’t cleared.
Great post!
Others have mentioned the HPMOR-style “take a poll of different aspects of your personality,” which I have found to be entertaining and useful.
I’d also like to endorse the method for troubleshooting. I got the idea from Ridiculous Fish’s blog post from 3 years ago.
When I have a technical problem I’m stuck on, I try to ask myself “What would someone who’s smarter than me do?” This is really just “imagine a parody version of person x and see if that causes you to think about the problem in a different way.”
I like to consult Imaginary Dr. House (“The problem is something very rare and obscured because your data is lying to you”), my former boss (“The problem is the most obvious thing it could be, trust yourself and go solve it!”), my college roommate (“Maybe there’s a YouTube video from a dedicated hobbyist that explains this”), and some others.
I wrote up one experience with this technique (not as good as Ridiculous Fish’s) a few months ago, when I had a baffling issue to solve at work (FTP on April 26th at 2 AM).
I don’t carry around a mental model of myself, but I think I will start on it.
I do explicitly carry around a mental model of my boss. Whenever I am working out a deal and I’m not sure about whether to agree to a point, I ask the little boss simulation. One of my boss’s sterling qualities is that he makes decisions quickly, much more quickly than I do. Where I tend to gather lots of information and examine nuances, he simplifies. He is also better at (and institutionally more appropriate for) comparing incommensurate values (like different risks and rewards, where the units are not comparable). The little boss simulation has a very good batting average; I am very surprised when he gets things wrong.
The reason I think I will extend this to a model of myself is that I have people working for me who also do deals. We talk about how to step back from a conflict in the process of a negotiation, but I have not had much good advice to give there. “Take a deep breath.” Yeah, that actually works a lot of the time, but what about when it does not? The simulated rational me might be just the thing. I’ll just ask myself what I would think about the proposal if I weren’t offended / backed into a corner / embarrassed at being insufficiently prepared for the negotiation / desperate to get the deal done.
Max L.
I am also trying to start implementing a self-model of the type Brienne describes.
But I also have mental models of people with whom I interact everyday (instead of myself). Unfortunately, I don’t construct them consciously—they appear whenever I have issues with the real person in question. I’ll argue with them before confronting the actual person (if a confrontation is called for). When I do enter a real discussion with the real person, I’m almost always struck by how wrong my models were—to the point of being actively damaging to my psychological well-being.
I do that exact thing (inaccurate mental models of people) too! Have you found any useful ways to counteract that?
I haven’t found any way other than forcing myself, when possible, to consciously look at the unconscious models of people I make from time to time and updating the models accordingly, but this method has not been very effective.
I’ve been working on noticing that I’m arguing with them, and running a mental process to halt those threads. It helps a lot in the moment. More importantly, it may even be having a preventative effect—I think I’m experiencing these imaginary fights less often.
I’ve tried this before and I find it difficult to trust this “other self”. What works better for me is to treat “other self” as an information source and then have “real me” make a decision with this information in mind.
The “real me” is sort of like a benevolent dictator. It has to have the final say, but ultimately it’s capable of deferring to “other self” when appropriate. Maybe this is just a thing with me (“real me” always has to have the final say, I can’t just trust someone else).
Of course, this is just one data point. Other people very well may be different.
Hah! I know that feeling so well. Then when I notice it, I feel doomed AND stupid!
This sounds like a good technique for dealing with social phobias or ADHD, but I hope you don’t use it for complex, long-chain-single-point-of-failure personal or ethical problems. Those are areas where emotions stop us from doing terrible things because we forgot to carry the 1 in column 3.
This is a line of development that—while clearly useful—seems somewhat hacky and unpromising to me. While I agree that this is likely to yield useful benefits in the short run, it strikes me that fixing one’s internal structure in order to produce reliably correct external actions without these sorts of hacks seems more promising in terms of long-term growth and skills.
About a year ago, I thought that lucid dreaming was a great path to rationality. While lucid dreaming is a great way to train the skill of noticing confusion, I no longer recommend it to people asking me for advice on rationality practice, because I think you hit the skill ceiling relatively fast and it doesn’t particularly lend itself to further development.
I’m worried that this strategy falls prey to the same flaw—while it’s quite effective in the short run, I think that people using these methods will ultimately have to learn the internal solutions anyway if they wish to progress to more advanced domains. Therefore, it makes more sense to me to just start with the internal solutions.
(Of course, if you need rapid skill growth in the short term, this might well be a useful strategy to adopt—just be aware of the downsides.)
I’ve been thinking about trying out lucid dreaming. Do you think it’s not useful in general, or just in terms of becoming more rational?
Hmm, depends on what you mean by useful. I think lucid dreaming is:
a) very fun
b) useful for becoming more rational, but only in a somewhat limited way—it can be very good for training noticing confusion but doesn’t seem to have a huge amount of potential beyond this.
By useful, I mean, can I use it to:
Practice a speech?
Deliberate about a decision with imagined famous people?
Get more comfortable around people of high status?
Hm, so I tried thinking about how I could apply this to a problem I have and can’t quite see how to do it. Any suggestions? Am I missing or misinterpreting a point or is this just not a good problem to apply this solution to?
Here’s the problem: I like rock climbing, but I’m not very good at it. The thing holding me back in the situation I’m most concerned with here (lead climbing outdoors) is primarily that I’m scared. I’m frequently in a position where I’m capable of executing the next move with very high probability, but I don’t want to attempt it because I’m afraid that I’ll fail and fall.
Of course there are lots of ways to try and work on this, but I’m curious whether this seems like the kind of situation where this approach ought to be applicable. I can try to simulate a not scared Emily, and this is quite easy to do: of course she would execute the move right away, but I still have plenty of resistance to just copying her. My knee-jerk tendency is to rationalise that if she’s not scared, there must be a good reason: she must also be better at climbing! So I can’t just copy her, because I’m not as good! What if I fall?
Should I be trying to convince myself that I’m successfully simulating a climber with the same skill level, but less fear? Or perhaps this is just not a situation where this is a good approach, because my fear is for a reason, and if my assessment of my ability is accurate then I’m right to be scared given the (small, but not really tiny) probability of failure?
You will probably be less scared when you’ve climbed more, so maybe it would be more helpful to simulate a climber who has already fallen a bunch of times and knows she won’t fall this time.
However, given that climbing involves a degree of physical memory, it might be even more useful to actually fall a bunch of times, until you are confident that you can recover successfully.
Yeah. I’ve already more or less accomplished this step with indoor climbing, where most of the time I’m pretty happy to chuck myself off from the top without clipping the chains. I can’t seem to make it transfer outside though. In trad climbing it doesn’t seem like good practice to purposefully fall onto your gear (I’ve taken exactly one lead trad fall ever; it was completely unproblematic but I don’t want to make a habit of it), so I guess maybe the answer is do more sport climbing and fall off loads. The bolts are always soooo far apart though, and the kind of grades I climb tend to have big holds and ledges and slabby sections which makes for really offputting falling prospects. I wish I was good enough to climb more fun steep stuff outside. (Indoors I am much better at steeply overhanging stuff than vertical walls and slabs, but outside it seems hard to find the kind of route that has big enough holds to be accessible at my grade on overhanging terrain.)
Presumably you climb with all the standard safety measures in place, so even on lead you are not at that much risk, unless your gear fails—not a very common occurrence in rock climbing. This means you probably have more fear than is applicable in this situation (i.e., more than the amount of fear you would have in a non-climbing situation of similar risk, like maybe mountain biking). If so, then figuring out in which situations you have the wrong amount of fear could be a start.
Yeah, most of the time the consequences of failing a move are not very bad. One problem I find is that, all else being equal, easier routes (where I’m more confident I can do the moves) at least feel like they have worse consequences of failure, because they’re full of big holds, ledges, and slabby sections which you can bash yourself up on before your gear catches you.
Right. Seems like an incentive to progress to vertical and overhangs :)
I don’t even need an incentive! I love overhangs indoors and I’m way better at them than slabs/vertical stuff. But most steep stuff outdoors seems to be well beyond the grades I might attempt to lead, at least round here. One day I’ll be good enough… maybe… :)
I like this, and I’m going to try it on a decision I’ve been putting off for years.
(said decision involves whether or not I should tell deeply religious family that I don’t believe in god(s))
May I suggest that you make sure all of your money is in accounts that don’t also have their names? You’d be disappointed by how many ~college age people get screwed by this.
Please let us know how that turns out. I’ve been terrified to tell my fiercely traditional family that I’m bisexual.
Sure. I’ll be doing it later this month.
That is if this technique gets me to actually do it!
If you have time before you do it, you can practice. I’ve done this for meetings and presentations that I anticipate will be difficult. I start by imagining what someone else would do (“how would my competent friends handle themselves in a meeting like this?”) and then try on different versions of myself, if it’s not immediately obvious what a braver version of myself would do. (I often have that problem.) It’s like what Brienne describes but over a longer period of time, so you can take advantage of habit formation. Then, if you have what you’ll say and how you’ll say it well rehearsed (not memorized, but rehearsed), it’s easier to go into the difficult situation sort of on autopilot. You can keep Brienne’s technique in your back pocket if the autopilot fails.
The downside of rehearsal in your head is that you might find yourself dwelling on all the horrible things that could happen. This is usually where my mind goes if I let it because it’s easy to let the models I have of other people run amok and show me all the worst possibilities that I might have to face. You have to just rehearse the part you know you own and not spend too long imagining in vivid detail what happens next, besides what is useful for preparation (like what beoShaffer suggests and similar practical plans). I hope it works out for you and you’ll let us know how it goes!
How did it go? Were your family okay with it? Was the technique effective?
Before I brought it up, I saw some signs of possible agreement with my viewpoint from my wife. I put off pressing the issue until I can determine what these signs I saw are all about.
Hopefully, I’m not just deluding myself to avoid the hard decision.
Depends on what specific information you are waiting for before you can decide. If it’s something that won’t happen soon, or indeed ever, it’s more likely to be delusion. If it’s something you can figure out in a week, or just ask about, then it’s more likely to be legitimate. Best of luck.
This trick is pretty powerful. I’m channeling a model of the more confident version of myself to post this comment, rather than just lurking like I normally do.
Thanks a lot!
This resonated with me instantly, thank you!
I now remember, I used to do something similar if I needed to make decisions, even minor decisions, when drunk. I’d ask, “What would I think of this decision sober?” If the answer was “it’s silly” or “I’d want to do it but be embarrassed”, I’d go ahead and do it. But if the answer was “Eek, obviously unsafe”, I’d assume my sober self was right and I was currently overconfident.
A structural explanation would be that your map of yourself doesn’t correspond well enough to the territory (your actual self), and the rational self is a way to create a new—better—map. Using the better map (by consulting the rational self) effectively connects both maps. My understanding of how the brain works lets me guess that the two maps will slowly merge—hopefully fixing whatever was at odds with the old map.
In a way the new map is less detailed, but at least the projection is better—so to speak.
Does this make sense?
Nice point. Let’s change the terminology slightly.
The territory is your yard. A couple of flower beds are overgrown with weeds; that spot over there doesn’t get enough water; and that spot gets too little sun. It’s a little disheveled.
The map is the pretty diagram that you would make of your yard if you had the energy and desire to make it what it could be. You’d pluck the weeds, water the dry patch, and put some shade-loving plants in the dark spot. That would be a heck of a garden.
If you were sitting in it, you would have a wonderful view of your surrounding environs. By visualizing the better yard, you can visualize the view. And the view is what you wanted out of your yard.
But now I’m tempted to say “fake it til you make it.”
Max L.
Reminds me of http://en.wikipedia.org/wiki/Rubber_duck_debugging
Using an imaginary external source to solve problems internally.
What I’m interested in is whether this method is applicable to social situations as well. I am not a naturally social person, but have studied how people interact and general social behaviors well enough that I can create a simulation of a “socially acceptable helltank”.
I already have mental triggers (what I like to call “scripts”) in place for a simulation of my rational mind—or rather, a portion of my rational mind kept in isolation from bias and metaphorically disconnected from the other parts, which can override the “main” portion of my mind in case the main portion becomes irrational at some point, similar to a backup system overriding a corrupted main system.
Until today, however, I have not thought of using them to simulate social skills. I suppose I might eventually spread out a bunch of simulations, what eli_sennesh called a Parliament of different aspects of your personality in his cancelled post, in order to guide my decision making in certain situations, with a “master aspect” (the aforementioned rational simulation) controlling when to give an aspect override privileges.
Still a very good post. Thank you for it.
I second the call to move this to Main.
Thirded.
I suspect this trick works by not only reducing decision fatigue, but by also offloading rejection fear, which makes this hack appropriate for social anxiety. Fear of embarrassment is one step removed from myself.
Does anyone who tried this years ago have an update on the long-term effects?
I find it interesting that Less Wrong appears to be rediscovering existing ethical theories.
This article argues for a form of virtue ethics arising from utilitarianism—in order to be a good person, simulate an alternate self free of whatever desire is applicable, and then use them as a moral exemplar.
Similarly, Eliezer’s arguments for Coherent Extrapolated Volition in FAI bear a striking resemblance to Rousseau’s arguments regarding the collective will of a state.
Another example of this that springs to mind is this less-popular post on beeminding sin. http://lesswrong.com/lw/hwm/beeminding_sin/
We have a large body of collected philosophical thought available to us. At least some of those concepts are adaptable to everyday problems and are therefore useful things to carry around in your mind. However, biases exist that make many people hesitant to listen to historical sources: “in the past, people had less technology than we do” is often conflated with “in the past, people were less intelligent than we are.”
Even if we accept that people in the past were less intelligent, that still doesn’t rule out that they may have had some good ideas. If it did, then we could judge ideas by their source rather than by the reasoning behind them. “People from the past said it” is not an argument for or against a topic any more than “Hitler said it.” (Note that this also is an argument against the failure mode of treating these ideas as the “wisdom of the ancients.”)
It seems to me that a general critical reading of acclaimed philosophers could save everyone in the Less Wrong community a lot of trouble reinventing their ideas, given that examining an already-stated hypothesis is a lot easier than going out and finding one.
I think the more serious issue is that the body of collected philosophical thought is too large.
It’s not obvious to me that this is true. I think there’s a large benefit from a single person doing a deep dive on something, and reporting the results: “This is what I learned reading Rousseau that’s relevant to rationality.” This way all the community needs to do to learn about Rousseau’s connection to rationality (on a conversational level, at least) is read the post, and if they see a specific idea and think “I want to read more about that,” then they know exactly where to start.
(I follow this advice and write reviews of books for LW; my interests are in decision-making, and so that’s where my reviews are. If your interests are in philosophy, that’s a good way to contribute significant value to the community, and earn a bunch of karma in the process.)
Really? I read it simply as a way to combat social anxiety/decision fatigue/akrasia—object-level advice, in other words, nothing to do with ethics or meta-ethics or whatever. You seem to be reading a philosophical bent into this that I don’t think was particularly present in the actual article. Likewise with the article on Beeminder.
As for CEV and Rousseau, well, I haven’t read much Rousseau, but I sincerely doubt that he had anything close to FAI in mind when constructing his arguments regarding the “collective will of a state”. Note also that a “state”, i.e. a political body, is not at all the same as the entirety of humanity. The latter requires far more cognitive science and psychology to investigate—fields that were hardly present in Rousseau’s time, if at all. Are you sure the comparison here is valid?
Finally, on your point regarding mainstream philosophy: it seems reasonable to say that there are probably some useful insights out there in philosophy. Getting to these insights, however, often requires wading through large amounts of bad thinking and motivated cognition. The problem with philosophers is that often they are smart, but not rational. As a result, they are great at making elaborate arguments that sound convincing (and the issue is further muddled by the dense and obscure language a lot of them seem to like to use), but in fact are motivated by something other than truth-seeking, e.g. “This position seems aesthetically/morally/intuitively pleasing to me; therefore I will argue for it.” If 90% of mainstream philosophy is useless and only 10% is useful, it seems to me as though my time (and the time of fellow LW readers) could be better spent on other things.
Ah, it’s just like “What Would Jesus Do” bracelets.
Rational!Jesus
We have the next HPMOR.
I loved the explanation about using it to control your temper. In fact, just after reading the part where you talk about the party, my very first thought was how I could use this to get a hold of my temper. This happened before I finished reading that paragraph.
There was an interesting segment on NPR’s Morning Edition this morning along these lines.
I just want to say that the title of this post is fantastic, and in a deep sort of mathy way, beautiful. It’s probably usually not possible, but I love it when an appropriate title—especially a nice not-too-long one—manages to contain, by itself, so much intellectual interest. Even just seeing that title listed somewhere could plant an important seed in someone’s mind.
I imagined what rational me would do a couple of hours ago, and he’d have gotten a head start on next week’s workload until he was tired, and then started tomorrow off on much better footing. (I’m not talking about being a workaholic—I’m lazy and have kind of fallen behind—I could stand to work a bit more.)
Instead, I read about why the great filter probably doesn’t lie between the evolution of a nervous system and dolphin-level intelligence, learned about ‘biological dark matter’, dismissed it as viruses, undismissed it, learned that it was probably just material from unsequenced portions of microbe genomes, learned things I never knew about the difference between archaea and bacteria, about how they might have combined to form eukaryotes, about tons of various external factors that could keep life from getting the chance to evolve in the first place, read far too much discussion about hypothetical engineers in hypothetical dolphin bodies, then finally came back to this thread and read all the comments.
I guess I did it wrong? :(
Come now, how could anything you call “work” possibly be less important than all that?
It’s a good way to “gamify” decision pressure such that insecurities and the typical anxiety involved with “I have to admit I’m doing this wrong” don’t rear their ugly heads. Instead, you dissolve the sense of self just a bit, with a “I’m a bunch of modules”-approach that makes it oh-so-easier to acknowledge and fix mistakes, since there’s less of the monolithic “I did wrong, I am bad”-response triggered.
Nice trick, in short.
A variation of this technique (pretending to be Batman) works for children.
This rational asshole makes me do stuff I don’t wanna do :(
Have you checked this out : http://lesswrong.com/lw/7i/rationality_is_systematized_winning/
I’m not sure how this article relates to my comment.
I’d say that to a certain degree all of us do this, even if we’re not all consciously aware of it. The unintentional use of this “trick” in my view is as obvious as when people imitate others. To the more extreme end, as you describe here, it can be the deliberate and wilful act of simulating the thoughts of another person for apparent guidance in times of stress.
Overall, I’d say this is just a version of thinking of what a “prudent man” would do, or some might even use the term “straw man” (to separate themselves from reality for the purposes of winning an argument), and for me I have spent much of my life referring to the guidance of others, including a long list of gym trainers—both real and simulated.
The danger, however, is that the subjective nature of the mind running the simulation does create a bias, even if you can’t see it. Just as my simulated gym trainer will advise me against eating junk food, ultimately it is my own motivation that determines whether I follow through on the final act. It might be a “trick” only because it is actually you making the decision without admitting it, but that doesn’t mean your decisions are any better, or that they don’t ultimately suffer in times of fatigue.
See also: dissociation.
I think I’ve been doing something like this for a long time, but imagining the simulated decision-maker as a “Ghost of Agency and Making Things Better” rather than an idealized version of myself. People seemed to find that a lot more confusing than this, though, so I’m going to start describing it this way instead.
You mean that you don’t have an entire Parliament filled with models designed to represent aspects of your own psychology?
You’re buggy software running on corrupted hardware. Fork redundant copies and vote.
No. Your mind is never magically going to turn nonbuggy. AFAICT, managing the bugginess is one of the most important but most understated tasks we face in life: all our interactions with other people are supposed to be pre-filtered for non-bugginess.
EDIT: This apparently came off as way more harsh than intended. Retracting for tone but leaving in existence.
I’ve also used the “think of yourself as multiple agents” trick at least since my first read of HPMOR, and noticed some parallels. In stressful situations it takes the form of rational!Calien telling me what to do, and I identify with her and know she’s probably right so I go along with it. Although if I’m under too much pressure I end up paralysed as Brienne describes, and there may be hidden negative consequences as usual.
It seems like you’re continually rediscovering NLP.
WWRBD