Trying to Try
“No! Try not! Do, or do not. There is no try.”
—Yoda
Years ago, I thought this was yet another example of Deep Wisdom that is actually quite stupid. SUCCEED is not a primitive action. You can’t just decide to win by choosing hard enough. There is never a plan that works with probability 1.
But Yoda was wiser than I first realized.
The first elementary technique of epistemology—it’s not deep, but it’s cheap—is to distinguish the quotation from the referent. Talking about snow is not the same as talking about “snow”. When I use the word “snow”, without quotes, I mean to talk about snow; and when I use the word “‘snow’”, with quotes, I mean to talk about the word “snow”. You have to enter a special mode, the quotation mode, to talk about your beliefs. By default, we just talk about reality.
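(A rough aside, not from the original post: in programming terms, the referent/quotation distinction is roughly a value versus a string naming that value. A minimal sketch, with made-up stand-in objects:)

```python
# The referent vs. the quotation, as a toy example.
snow = {"state": "frozen", "color": "white"}  # a stand-in for snow itself
word = "snow"                                 # the word "snow"

print(snow["color"])  # a fact about snow (the stand-in object): 'white'
print(len(word))      # a fact about the word "snow": 4 letters
```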
If someone says, “I’m going to flip that switch”, then by default, they mean they’re going to try to flip the switch. They’re going to build a plan that promises to lead, by the consequences of its actions, to the goal-state of a flipped switch; and then execute that plan.
No plan succeeds with infinite certainty. So by default, when you talk about setting out to achieve a goal, you do not imply that your plan exactly and perfectly leads to only that possibility. But when you say, “I’m going to flip that switch”, you are trying only to flip the switch—not trying to achieve a 97.2% probability of flipping the switch.
So what does it mean when someone says, “I’m going to try to flip that switch?”
Well, colloquially, “I’m going to flip the switch” and “I’m going to try to flip the switch” mean more or less the same thing, except that the latter expresses the possibility of failure. This is why I originally took offense at Yoda for seeming to deny the possibility. But bear with me here.
Much of life’s challenge consists of holding ourselves to a high enough standard. I may speak more on this principle later, because it’s a lens through which you can view many-but-not-all personal dilemmas—”What standard am I holding myself to? Is it high enough?”
So if much of life’s failure consists in holding yourself to too low a standard, you should be wary of demanding too little from yourself—setting goals that are too easy to fulfill.
Often, where actually succeeding at a thing is very hard, merely trying to do it is much easier.
Which is easier—to build a successful startup, or to try to build a successful startup? To make a million dollars, or to try to make a million dollars?
So if “I’m going to flip the switch” means by default that you’re going to try to flip the switch—that is, you’re going to set up a plan that promises to lead to switch-flipped state, maybe not with probability 1, but with the highest probability you can manage—
—then “I’m going to ‘try to flip’ the switch” means that you’re going to try to “try to flip the switch”, that is, you’re going to try to achieve the goal-state of “having a plan that might flip the switch”.
Now, if this were a self-modifying AI we were talking about, the transformation we just performed ought to end up at a reflective equilibrium—the AI planning its planning operations.
But when we deal with humans, being satisfied with having a plan is not at all like being satisfied with success. The part where the plan has to maximize your probability of succeeding, gets lost along the way. It’s far easier to convince ourselves that we are “maximizing our probability of succeeding”, than it is to convince ourselves that we will succeed.
Almost any effort will serve to convince us that we have “tried our hardest”, if trying our hardest is all we are trying to do.
“You have been asking what you could do in the great events that are now stirring, and have found that you could do nothing. But that is because your suffering has caused you to phrase the question in the wrong way… Instead of asking what you could do, you ought to have been asking what needs to be done.”
—Steven Brust, The Paths of the Dead
When you ask, “What can I do?”, you’re trying to do your best. What is your best? It is whatever you can do without the slightest inconvenience. It is whatever you can do with the money in your pocket, minus whatever you need for your accustomed lunch. What you can do with those resources, may not give you very good odds of winning. But it’s the “best you can do”, and so you’ve acted defensibly, right?
But what needs to be done? Maybe what needs to be done requires three times your life savings, and you must produce it or fail.
So trying to have “maximized your probability of success”—as opposed to trying to succeed—is a far lesser barrier. You can have “maximized your probability of success” using only the money in your pocket, so long as you don’t demand actually winning.
Want to try to make a million dollars? Buy a lottery ticket. Your odds of winning may not be very good, but you did try, and trying was what you wanted. In fact, you tried your best, since you only had one dollar left after buying lunch. Maximizing the odds of goal achievement using available resources: is this not intelligence?
It’s only when you want, above all else, to actually flip the switch—without quotation and without consolation prizes just for trying—that you will actually put in the effort to actually maximize the probability.
But if all you want is to “maximize the probability of success using available resources”, then that’s the easiest thing in the world to convince yourself you’ve done. The very first plan you hit upon, will serve quite well as “maximizing”—if necessary, you can generate an inferior alternative to prove its optimality. And any tiny resource that you care to put in, will be what is “available”. Remember to congratulate yourself on putting in 100% of it!
Don’t try your best. Win, or fail. There is no best.
Remember Morpheus, in The Matrix, saying to Neo:
“Come on! Stop trying to hit me, and hit me!”
That seemed pretty efficient.
There are many things worth just “trying your best”. I try to understand many of Eliezer’s posts and when I succeed (once in a long while) I find it very rewarding. I understood this one. And the one about the zebra. So it’s not a complete waste of my time. (Some would say: “your employer’s time”. Whatever.) However, if I were to put enough effort into it to understand his posts always, I’d probably be fired for one of two reasons: wasting too much paper printing articles not related to work, or not doing any work. (Either that or I would have to have my own Internet connection, my own computer and so on, but I’d rather eat.) So I’m quite happy with just trying.
But you’re not “trying your best”. That implies you’re trying to convince yourself that you’re trying your best. You know it’s not worth doing. You’re just trying the optimal amount.
I think Yoda’s comment is more a user’s guide to mental machinery than a reflection on reality. To create something new, you first create a vision. Vision without failure is in many ways more powerful than vision with failure. (Obviously, in most other ways it is worse.) Creating new things is difficult even without second-guessing yourself before you begin.
When people say they will try to do X, I personally read it as meaning that X is not always their highest priority, and other things might take precedence, so the resources aren’t always available to do X.
“I’m going to try to prove P != NP” means they might give up at some point and do something else when progress seems unlikely. With regards to FAI, I think the phrasing of your commitment to building it that would be most acceptable to the public, without saying try, would be “I’m going to figure out if building FAI is possible, and if it is, build it”.
I regard people saying, “I’m going to build a FAI that stably recursively self-improves,” as somewhat similar to people saying, “I’m going to build a worm whole device”.
Sometimes, when you cannot come up with a realistic plan to actually ‘do’ something, isn’t it the best course of action to go brute-force and try to do whatever you can in the area, in the hope that a path will eventually become available?
I heard a similar argument about our rush to cure cancer when we don’t really understand DNA yet. Wouldn’t it be more efficient to devote a lot of effort to understanding DNA, see what comes up, and then go towards curing cancer? Arguably this top-down approach to medicine (what chemical can I throw at this disease?) is a source of a lot of negative side effects of modern medicine. Maybe a bottom-up approach (how does this disease work? what does it affect? how does that work?) will take longer but give better results and produce more reusable knowledge.
Of course, this is easy to say, but when your relative is dying of cancer you have little time to ponder on the way information is encoded in DNA.
Perhaps there is a good analogy with forward-chaining and backward-chaining here? When backward-chaining is not giving you any solutions, maybe it would be beneficial to start examining the current situation and the alternative paths forward, in the hope of one of them giving a path towards the goal?
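(As a rough aside, here is a minimal sketch of the forward-chaining/backward-chaining distinction the analogy assumes; the rules and facts are invented purely for illustration:)

```python
# Toy rule base: (premises, conclusion) pairs. Purely illustrative.
rules = [
    ({"have_ladder"}, "can_reach_switch"),
    ({"can_reach_switch"}, "switch_flipped"),
]
facts = {"have_ladder"}

def forward_chain(facts, rules):
    """Start from what you have; derive everything reachable."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def backward_chain(goal, facts, rules):
    """Start from the goal; recursively look for rules that could produce it."""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

print(forward_chain(facts, rules))                     # everything derivable from what you have
print(backward_chain("switch_flipped", facts, rules))  # True: the goal is reachable
```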
Of course none of this goes against the main point, that claiming to try is already making excuses for failure and therefore not really attacking the problem at full strength.
Hole, even. Damn homophones.
The quote still annoys me, despite your interpretation. Maybe for very important things, we must avoid “trying” and really do all we can.
But when giving up is a reasonable option, or when resources are scarce, trying is exactly what we need to do. When I was younger, I tried being an historian, and tried being a mathematician. I failed in the first one, and succeeded in the second. I could have done more for the first—I could have put in all possible efforts, forgone sleep and other distractions, etc… But it would have been dumb. If things are hard, and not that important, superficial trying is exactly what is called for.
Luke’s response to Yoda should have been “I will try and lift it out using the force. If that fails, I will try and lever it out. If that fails, I will build a crane and try and lift it out. If that fails, I will try and find out if there’s another way of getting off the planet...”
Apparently Luke didn’t have to try for very long: http://www.cracked.com/article_16625_p2.html
We’ll likely see how long someone can spend straining to lift the starship out of the swamp with no success before giving up. More zebras than Jedi masters in this near, near galaxy.
To say that you will achieve anything worth achieving is to arrogantly imply that you believe yourself better than those around you who accept their exempting limitations. Thus it is necessary to say that you are ‘trying’, as a clear message that you understand that you won’t actually succeed. The danger is of then forgetting not to merely ‘try’.
To say that you will try instead of do just indicates to the receiver of the message that there is a relatively large probability of failure.
I wonder if a sinecure isn’t a similar pitfall for someone who’s out to save the world.
I wonder if nerdish literalism is a problem here. Saying “I will do X” when I can’t rationally assign a high probability to success feels like dishonest overconfidence—and if I fail, it’ll have been an outright lie.
When Yoda said “there is no try,” I took it more literally. In the absence of human concepts there is no “try”; there are only things that act or don’t act. Let go of your mind and all that.
Just do it?
Git ’er done?
Let’s get it done?
Will, I think a “worm whole” device would be pretty challenging too, and perhaps even worthwhile. You could use the basic techniques to create a “finger whole” device for victims of industrial (or kitchen) accidents.
Stuart: “Luke’s response to Yoda should have been ‘I will try and lift it out using the force. If that fails, I will try and lever it out. If that fails…’”
Then Luke would have succeeded at rescuing his craft, but made no progress in learning The Force, and the Dark Side would have ruled the galaxy. :-)
Nick: “I wonder if nerdish literalism is a problem here. Saying “I will do X” when I can’t rationally assign a high probability to success feels like dishonest overconfidence—and if I fail, it’ll have been an outright lie.”
No, that’s the point exactly. Your body will adapt its own behavior to be in accord with your mind. (It’s the old “Whether you think you can, or think you can’t, you’re right”.) It’s easy to demonstrate: run 10 miles saying “I’m a great runner, I’m strong, I’m fast, …”, and then next week run 10 miles saying “I hate running, I’m weak, I’m hungover, I’m slow, …”. Your body doesn’t want to live the lie, so it makes your internal monologue true. When you commit completely to something, you can accomplish much more than you think you can. (Damn, I sound like a cheesy motivational speaker.)
Yoda’s point was that the Force did not make distinctions such as ‘size’, and that it was Luke’s preconceptions and biases that were preventing him from lifting the fighter.
Luke saw a difference between lifting small rocks and lifting the fighter, and his perceptions caused his attempts to fail. Yoda was trying to teach him that the only real obstacle was his disbelief and the failure to react correctly resulting from the disbelief.
For Jedi, there really isn’t such a thing as ‘trying’. There is only success, and not permitting yourself to succeed.
This would be a lot easier to explain if you people understood more about Taoism.
As you put it, the Jedi stink of Voldemort’s “there is only power and those too weak to seek it”. While the Force is omnipotent, Jedi are not, and while their self-limitation by beliefs is a powerful reason for it, it is not the only reason. The Jedi Council could not break through Palpatine’s Force Concealing, not because they believed Force Concealing to be unbeatable, but for two reasons: first, defense is simply stronger than offense in this field (likewise with Occlumency and Legilimency in Harry Potter); second, they had too little information to focus solely on Palpatine (there were probabilistic cues, but the probabilities for someone like Mas Amedda, given their information, were still high even if they fully believed Dooku).
(We need MOAR generalizing from fictional evidence… Joking.)
For Jedi, there really isn’t such a thing as ‘trying’. There is only success, and not permitting yourself to succeed.
I wonder how that works in a Jedi fight to the death.
Wearing my mechanical engineer’s hat I say “Don’t be heavy-handed.” Set your over-force trips low. When the switch is hard to flip or the mechanism is reluctant to operate, fail and signal the default over-force exception.
You can always wiggle it, or lubricate it and try again, provided you haven’t forced it and broken it. For me, trying is about running the compiler with the switches set to retain debugging information and running the code in verbose mode. It is about setting up a receiver down-range. Maybe the second rocket will blow up, just like the first did, but at least I will still be recording the telemetry.
I think that Plan A will be stymied by Problem Y, but I try it anyway, before I try to solve Problem Y. My optimistic side is hoping Problem Y might not actually matter, while my pessimistic side thinks Problem X is lurking in the shadows, ready to emerge and kill Plan A whether I solve Problem Y or not.
I try in order to gain information.
It is usually important to proceed with confidence. When things go wrong they throw off fragments of broken machinery and fragments of information. Surprised, we fail to catch the flying fragments of information, and must try again, forewarned.
Two meanings of the word “try” fight for mind share.
To try: to position oneself in the right spot to catch the flying fragments of information flung out from failure.
To try: The psychological mechanism that lets us fail through faint-heartedness, again and again, but never quite understand why.
Two meanings sharing a word is a common problem with natural language. The particular danger I see for Eliezer is when the second meaning hides the first.
He says he isn’t ready to write code. If you don’t try to code up a general artificial intelligence you don’t succeed, but you don’t fail either. So you can’t fail earlier and harder than you ever expected, and cannot suspect that the Singularity is far. If you won’t try, you’ll never know.
You really need to read up on Star Wars. Soresu, the style you describe, was shown to “merely delay the inevitable” more often than not, as Kreia put it, if the opponent fights to the death. And two Jedi would never need to fight to the death with each other, because they would not do something to make the opponent think their death is preferable in the first place.
Would people stop saying that! It is highly irresponsible in the context of general AI! (Well, at least the self-improving form of general AI, a.k.a., seed AI. I’m not qualified to say whether a general AI not deliberately designed for self-improvement might self-improve anyways.)
Noodling around with general-AI designs is the most probable of the prospective causes of the extinction of Earth-originating intelligence and life. Global warming is positively benign in comparison.
Eliezer of course will not be influenced by taunts of, “Show us the code,” but less responsible people might be.
“The very best you can possibly do is the point at which the real work begins.”
It took you 28 years to realize this?
It seems you’ve missed the point here, a point common to Eastern Wisdom and to systems theory. The “deep wisdom” which you would mock refers to the deep sense in which there is no actual “self” separate from that which acts; thus thinking in terms of “trying” is an incoherent and thus irrelevant distraction. Other than its derivative implication that to squander attention is to reduce one’s effectiveness, it says nothing about the probability of success, which in systems-theoretic terms is necessarily outside the agent’s domain.
Reminds me of the frustratingly common incoherence of people thinking that they decide intentionally according to their innate values, in ignorance of the reality that they are nothing more nor less than the values expressed by their nature.
This seems a dumb semantic mistake, not a deep truth. You’re confusing “going to” as a prediction and “going to” as a statement of intent. You might prefer the word “intend” if that’s what you mean. And however you phrase it, there is uncertainty in both your chance of success, and limits to the amount of effort and risk you’ll undertake to accomplish this particular mission.
Thanks for bringing this up. My comment above can be read as basically complaining about this double meaning.
Reminds me of the importance of overconfidence to business success, somehow....
“You have to enter a special mode, the quotation mode, to talk about your beliefs. By default, we just talk about reality.”
This is a false dichotomization. Everything is reality! Speaking of thoughts as if the “mental” is separate from the “physical” indicates implicit dualism.
To facilitate an outcome it must first ‘become’ the facilitator.
“Quotation mode” is analogous to an escape character. There’s no dualism here.
Initially, I also thought this blog entry was faulty. But there indeed seems to be an important difference between having the goal do-A, and succeeding only when A, and having the goal try-A, and succeeding when only a finger (or a hyperactuator in my case) was lifted toward A.
One may note that if “mental events” M1 and M2 occur as “physical events” P1 and P2 occur, doing surgery at the P-level could yield better Ps for Ms than doing surgery at the M-level.

This is what I argue a lot about with my girlfriend. Am I really trying? What does it mean when I say that I’ll try to be a better listener or whatever? She always calls my bluff. I’m only promising to try. But that is what I mean. I’ll do what I can with the resources I have; I won’t promise more than I can deliver.
But what about hypnosis? A hypnotist says to his subject to try to lift their hand, and the subject can’t do it. But when he says to lift the hand, the subject will do it. So in suggestion, saying “try to do this” means “don’t do it”.
I hope to read a follow-up post about hypnotism and trying to try. I’ve only seen an abridged display of hypnosis in my medical studies, not the whole thing from start to finish. But the answer about what “trying” means lies there. When a hypnotist says “forget the pain”, some people really do, when they wouldn’t be able to do it by themselves however much they tried. I guess a hypnotist is only a specialist in making people believe that they have to do something, and that there is no possibility of failure.
If you can answer ‘yes’ to every “is x possible?” question about the problem, like
Is intelligence possible? Yes. (I am a mind.) Can it be instantiated in a machine? Yes. (Minds are machines.) Is looking at your own mind’s code, understanding it, and improving it possible? Yes. (I can understand code, but, alas, my brain is not available for me to hack. A mind made of code doesn’t have this limitation.)
you can say “What’s the use of trying? It’s but a matter of doing it. I will simply do it. I will begin now. I will stop when I’m done.” When you know that success is not forbidden by the laws of physics, trying ends and doing begins.
Right now I am doing and at one point in time I will say: “It worked.” The only thing that is uncertain is when.
If I ask the question “What needs to be done?” I will end up with a task that is incredibly difficult.
If I am to refrain from asking “What can I do?” then I am unable to choose a goal that is more rational.
Greater returns usually involve taking greater risks.
If I set myself goals that are beyond my reasonable limits it will involve taking far greater risks.
In some cases greater risks allow for greater potential reward at the expense of reducing expected utility. An example of this is a lottery.
Sure, I respect the value of intention. I can visualize success, I can aim high and I can give myself no quarter when it comes to making excuses. Nevertheless if I don’t consider “what can I do?” rationality will lead me to gambling, in one way or another. Possibly gambling with my health with risk of burnout.
There are questions I can’t answer about the problem.
Does human-level intelligence require some sort of changing of the source code in itself, experimentally at a local level? Neurons have no smarts in them by themselves; we share the same type of neurons with babies. What makes us smart is how they are connected, which changes on a daily basis, if not at shorter time scales. Is it possible to alter this kind of computer system from the outside, to make it “better”, if it is changing itself? If you freeze a copy of your software brain, you will change during the time you investigate your own smarts, and any changes you then apply back to yourself may be incompatible or non-optimal with the changes your brain made to itself.
In short, I think it is plausible that there are computer systems whose software I cannot understand and improve at a high, rational level. And my own mind might be one of them.
Before the project to actually build space flight capability (or nuclear explosives or computers or any other friggin’ hard thing) was started, engineers had to have ‘yes’ to every “is x possible” question. If they had a ‘dunno’, they had to figure it out, experimentally and/or theoretically. If something was a ‘no’, there was no point in trying.
http://en.wikipedia.org/wiki/Feasibility_study
There’s a familiar story—maybe you’ve heard it—a story about a proud young man who came to Socrates asking for knowledge. He walked up to the muscular philosopher and said, “O great Socrates, I come to you for knowledge.”
Socrates led the young man through the streets of the town—down to the sea—and chest deep into water. Then he asked, “What do you want?”
“Knowledge, O wise Socrates,” said the young man with a smile.
Socrates put his strong hands on the man’s shoulders and pushed him under. Thirty seconds later Socrates let him up. “What do you want?” he asked again.
“Knowledge,” the young man sputtered, “O great and wise Socrates.”
Socrates pushed him under again. Thirty seconds passed, thirty-five. Forty. Socrates let him up. The man was gasping. “What do you want, young man?”
Between heavy, heaving breaths the fellow wheezed, “Knowledge, O wise and wonderful...”
Socrates jammed him under again. Forty seconds passed. Fifty. “What do you want?”
“Air!” he screeched. “I need air!”
“When you want knowledge as you have just wanted air, then you will have knowledge.”
Can you choose to try harder than you actually are? Isn’t that like choosing to believe? I always thought you either believe or you don’t. We don’t have a choice in the matter. Do we?
[ TL;DR keywords in bold ]
Assuming freedom of will in the first place, why should you not be able to choose to try harder? Doesn’t that just mean allocating more effort to the activity at hand?
Did you mean to ask “Can you choose to do better than your best?” That would indeed seem similar to the doubtable idea of selecting beliefs arbitrarily. By definition of “best”, you cannot do better than it. But that can be ‘circumvented’ by introducing different points in time: let’s say at t=1 your muscle capacity enables you to lift up to 10 kg. You cannot actually choose to lift more. You can try, but would fail. But you can choose to do weight training, with the effect that by t=2 you have raised your lifting power to 20 kg. So you can do better (at t=2) than your best (at t=1).
But Eliezer’s point was a different one, to my understanding: He suggested that when you say (and more or less believe) that you “try your best”, you are wrong automatically. (But only lying to the extent of your awareness of this wrongness.) Because you do better when setting out to “succeed” instead of to “try”; because these different mindsets influence your chances of success.
About belief choice: Believing is not a simply choosable action like any other. But I can imagine ways to alter one’s own beliefs (indirectly), at least in theory:
Influencing reality: one example is the aforementioned weightlifting, a device for changing the belief “I am unable to lift 20 kg”—by changing the actual state of reality over time.
Reframing a topic, concentrating on different (perspectives on) parts of the available evidence, could alter your conclusion.
Self-fulfilling prophecy effects, when you are aware of them, create cases where you may be able to select your belief. Quoting Henry Ford: “Whether you think you can, or think you can’t, you’re right.”
If you believe this quote, then you can select whether to believe in yourself, since you know you will be right either way.
(Possibly a person who has developed a certain kind of mastery over her own mind can spontaneously program herself to believe something.)
(More examples of manipulating one’s own beliefs, there in the form of “expectancy”, can be found under “Optimizing Optimism” in How to Beat Procrastination. You can also Google “change beliefs” for self-help approaches to the question. Beware of pseudoscience, though.)
The usage of “try” was heavily addressed in the training I just did. The approach is to notice why a usage exists. What is the value of adding “try”?
Well, it demolishes the possibility of failure to realize the stated goal. After all, if I say, “I’m going to try to express myself coherently,” I can’t actually fail, as long as I do something, anything at all. I can give up at the first tiny obstacle, but, hey, I tried. How about “I tried to overcome my procrastination”?
We use “try” to avoid identifying “failure,” because we have been trained that failure is Bad. It’s not. Failure is inevitable if we undertake anything worth doing that isn’t already so easy that we don’t need to take any risks, we just do it. I don’t “try” to turn on the light in the room, I just flip the switch. (Sure, sometimes a light is burned out or something. But we would never ask someone, “Try to turn on the light.” We just ask them to turn it on.)
Failure is an essential part of the learning process, of the development of skill.
Yudkowsky’s ability to see beyond his original incomplete vision, and to openly acknowledge the former shortcoming, is part of what identifies him as Yudkowsky. That is not necessarily a common ability; most people become increasingly entangled in what they said before.
I’d just like to say that this post was one of the most effortless for me to intuitively embrace thus far, seeing as I’ve read HPMoR and the idea of Doing v. Trying is a common theme. I’ll be sure to tap into my mysterious dark side next time I need something actually… taken care of.
Failure is always possible. However there are two responses to failure. One is to be happy with having made the attempt. This does not make failure less likely in the future.
The other is to actually engage with and analyze your failure. If you didn’t flip the switch, your failure is a failure. You figure out why you came up with a plan that didn’t work. If the switch needs to be flipped again tomorrow, you will have a better chance of flipping the switch tomorrow. If some button needs to be pressed tomorrow, you won’t likely fail at button pressing for the same reason you failed at switch flipping.
Doing rather than trying is a commitment to the second response to failure.
A quote I liked from “The Village”:
Do other people really work like that? I thought that the thing with the Yoda quote was that the Force only works if you 100% believe in it. Unlike the nature of our world, the Force does care about your state of mind, and not only about your actions. But we do not live in that world.
If there is something I don’t want to do, for whatever emotional reason that I don’t really want to admit, I would never trick myself into believing that I have tried and failed. Instead, my inner clever arguer would try to convince me that the problem is too hard to even try, that the chance of success is too small compared to the effort of trying. If my clever arguer wins this argument, then I would not try.
(Actually it usually goes like this: I spot what my inner clever arguer is doing. I then admit to myself that I have an emotional preference for whatever my inner clever arguer is trying to push for. I take this preference into account, together with everything else that is relevant, and then I decide what to do. But that is beside the point here.)
Why would I even want to pretend that I have tried and failed? True failure is painful. Does pretend failure feel different?
I can understand the concept of trying to appear to have tried, to please someone else. But that is a very different thing. I generally do not approve of lying to others, but it is still conceptually different from lying to yourself.
I am rather offended by the thought that when I say, “I am going to try”, someone might interpret that as “I am going to try to try”, or even “I am going to pretend to try”. Because that was not what I said, and it was definitely not what I meant. When I say “I am going to try”, it means that I will put extra effort into the task, just because I am aware of the risk of failure.
We do live in a world where if you tell someone in hypnosis to move their arm up, they will move their arm up, but if you tell them to try to move their arm up, they won’t move their arm up.
Yes, it generally means that you put effort into the task. But effort doesn’t always mean effective action.
Let me clarify even more. To me, the word “try” refers to the conscious process of optimizing for success, with or without constraints. Constraints may be that I only want to put so much effort into the problem, or that I am not willing to take certain risks, etc.
Also, to me the word “do” means that I intend to perform an action that is trivial enough that I do not feel a need to optimize.
For example, right now I am trying to explain myself. I am optimizing this text for clarity, under the ill-defined, but very real, constraint that I am only willing to put in a limited amount of effort. But I am doing the actual typing on the keyboard. I do not try to hit the right keys. Hitting keys is trivial; I don’t do it flawlessly, but I do it well enough not to bother optimizing the effort further. Trying is more costly than doing, in my meaning of the words.
I never ever tried to try. I am not really sure what that would even mean, using my meaning of the word “try”. I try or I do not try; there is no try to try. However, I did just spend some time trying to try to try, and failed.
When I say I failed to try to try, I mean that using my meaning of the word “try”. The action that Yudkowsky calls “trying to try”, I would call “pretending to try”.
However, trying, as I use the word, does not necessarily mean trying hard. Sometimes a solution is worth the effort if and only if it is cheap. In this situation, if I do not know the difficulty of the problem, I will give it a light try. It can be thinking about a problem for X minutes, and if I did not make any progress, I drop it. But that is still a try. During those minutes I optimize to win. Because I do want that win. I just don’t want it very strongly.
That is interesting. Do you know the underlying reason for this? I am guessing that it has to do with conscious and non-conscious actions. Trying is a conscious effort, but doing is mostly not conscious. To my best understanding, hypnosis bypasses the conscious decision center of the brain, which would explain why there is no trying. But don’t trust me on this, because I know very little about hypnosis, and I am very good at making up explanations on the spot.
I was at a workshop once that involved hypnosis. It turned out that I am not very receptive to hypnosis. I am not saying I am immune, but it just did not work on me that time, and it did work on most of the others. I was really disappointed.
I am not convinced that there is a strong connection between the two mental phenomena, hypnosis and pretending to try, but the fact that my mind refuses both of them is evidence in this direction.
There’s the classic example of “don’t try to think of a pink elephant”. Most people you give that task to will exert effort into not thinking of a pink elephant, but that effort won’t lead to them not thinking of a pink elephant.
There’s trying. The person often does tense up their arm. It’s just that the arm doesn’t move as other muscles hold the arm in place.
In hypnosis you take certain metacognition away. If you tell someone to try they just try and exert effort but they don’t work towards a goal if you don’t give them a goal.
In addition to hypnosis, the Alexander Technique is a system for movement where having a clear goal for movement, and not trying to move, is an important concept. It leads to people moving with less tension and more ergonomically.
I think in a variety of contexts where the effects of mental states matter naive people engage in effort when you tell them to try but not necessarily effort that works effectively towards a goal.
To move again to a more general level, Bob the manager who works 80 hours per week and sleeps 4 hours per day is trying really hard to do a good job. Certainly more than Dave who works 40 hours per week and sleeps 8 hours. It’s certainly possible that Bob is more productive than Dave as a result of putting in more effort but it isn’t certain. Maybe he spends too much time in busy work and isn’t rested enough to concentrate on what matters.
Ok, you, and possibly most people, associate the word “try” only with putting in effort. For me “try” means something different (as I have tried to explain), because your “try” is not a natural concept for me. I will just have to keep this difference in mind in future conversation, whenever it is important for the communication.
I think for most people, if you ask them to define what “try” means, they will tell you that it’s about putting in effort to achieve a goal. Empirically, however, that doesn’t describe well the circumstances in which they use the word.
Especially on LW it might be possible that you actually wouldn’t describe the manager who works 80 hours as trying to do his best at his job, but what you said doesn’t make me confident that’s the case.
I was at a hypnosis seminar where one of the exercises is about temporarily forgetting numbers. There is no mental action you can take, no effort you can exert, that gets you to forget the numbers, but if you are in a mental state where you don’t try and simply follow the instructions of the hypnotherapist, you will temporarily forget the numbers.
At the end of the seminar, of the roughly 20 people there, I think there were two for whom it didn’t work. It didn’t work for me because I wanted to have the effect happen and therefore I couldn’t let go enough to stop trying to make it work. There was another person, who happened to be a professional hypnotherapist, for whom the same was true.
The mental state of just working towards a goal and not putting in any effort isn’t easy to achieve.
These do not go together. People on LessWrong would often describe things in ways that would be very weird to an average person.
Also, in the case of the manager working 80 hours, remember that the definition is about effort, not about number of hours. People need not believe that effort is strictly correlated with number of hours.
And in the hypnosis example, most people would say something like “if you try to forget, it won’t work”. In other words, they would not say that the person who exerts effort isn’t trying, just that he’s not successfully trying.
Yes, normal people associate working 80 hours with effort and on LW you might have people who don’t. The thing that matters for trying is effort.
The main point is that exerting effort and doing what’s necessary to achieve an objective are two different things.
There are certain effects that can be achieved in trance that you can’t achieve by exerting effort. Telekinesis isn’t one of them, but it makes sense that a fictional character who can do telekinesis would need an effortless trance state to do it.
There are certain tasks, like sitting in front of one’s computer, where you will have less back pain if you invest less effort into the act of sitting (and with the Alexander Technique you can learn how to do the task with less effort).
Then there’s EY’s meaning: a lot of people will say “I will try” when they are asked to achieve an outcome where they aren’t certain whether they can achieve it with the strategy they choose to pursue the goal. They commit to investing some energy into following the strategy, but they don’t commit to the responsibility of making the outcome happen.
I agree that, from your standpoint, you are correct in not entirely trusting me when I claim to know my own brain’s workings, in the case of this single word. And that is ok.
I wrote my first post out of frustration over this way of interpreting “try”:
I usually succeed in keeping my rants off the Internet, but not always. Sorry about that, and for getting unnecessarily defensive at your responses.
As said, I am ok with you doubting me on whether I know my own brain’s workings, in the case of this single word. But it would be fun if I could convince you. Do you want to help? Any idea of a test you could give me?
Regarding your example with Bob and Dave: who do I think is trying hardest? I do not know. To judge this, I would need to know the reasons for why they are doing what they are doing.
I have not yet defined how I want to measure the amount of trying. I have an intuitive idea, but it is less precise than my concept of trying. When I try to formalize my thoughts I get something like this:
Try X = Optimizing for X, usually given some constraints (e.g. unacceptable actions or risks, limited amounts of time, money, and other resources that one is willing to spend on the try)
Amount of trying X = How much time, money and other resources one is spending directly on optimizing for X.
Trying one’s best = Optimizing for X
Additionally, all the optimization happens in the real world. Aside from deliberate constraints, there are always the real constraints of the real world, including how smart one is. (Edit: Shit, do I run into the problem with determinism here? It should not matter, but I am not entirely sure. I need to think more about this.)
This means that I can try my best at something, and you can still try harder, if you have more resources that can be invested. I expect that this sounds odd to you, but it actually lines up nicely with my intuition.
Most tests I could give you would result in you trying to find the right answer and thus not test intuitive language usage. If you had a corpus of English text you wrote previously, you could search it for “try” and get the first X examples. Then we could analyse what you meant with the word try.
But I think I can work with the rest of your post.
This suggests that investing more resources means trying harder.
In cases where investing more resources means that success is less likely, that notion of trying harder isn’t optimizing for a goal.
The woman who’s playing hard to get isn’t “trying”. She isn’t investing resources. She might still use the strategy that produces the best results.
In the case of the hypnosis effect of forgetting the numbers, that’s not something I can achieve while trying to optimize for it. For me that seminar was a reference experience. I sat there and knew that I can only achieve the goal if I would stop trying to optimize for it. The fact that I really wanted to optimize for it and succeed only made it worse.
Investing resources and optimizing is different from doing what’s necessary.
Sometimes “Just be yourself” would be good advice if the other person could accept it*, because it stops the optimization and the trying that are the biggest problem.
*In practice people can’t accept it so it usually isn’t effective advice.
Yes, I just said so.
But only if the added resources actually go towards optimizing for winning. More precisely: if and only if I think that adding more resources will improve my expected outcome, then adding more resources is trying harder.
I know what you mean with the hypnosis, my experience was very similar. But I did less post analysis than you.
I am not going to get into exactly why I hate the advice “Be yourself”, because it is a bit too personal and also off topic. But because I thought it was such terrible advice, and wondered why anyone would say that, I did some asking and thinking. Next time you are giving advice, instead of saying “Be yourself”, say “Focus on others”. As you have already realized, saying “Be yourself” is telling people what not to do, which is not helpful. So tell them what to do instead. The best way to avoid doing X is to do Y instead, and there are extremely few situations where there is no possible Y to focus on. Meditation and trying to be hypnotized are the only examples I can think of, and even in meditation instructions you are told to focus on your breathing, or something, because doing nothing is too hard. But in most situations there are things you can focus your attention and efforts on that are actually useful, and not just an artificial distraction. The circumstance where “Be yourself” usually pops up is when someone needs advice on how to make a good impression on another person (date, interview for a job, etc). In these situations, a good choice is to focus on the other person, to get to know them.
Being told to focus on breathing is indeed the version of meditation that’s popular for teaching beginners, because it’s an easy entry. It isn’t too hard. There are harder ways to meditate that don’t work via easy prompts.
The same goes for “Just be yourself”: it’s too hard to expect the other person to do it, so you give them another prompt. But generally good social advice is more targeted to the individual person.
We call it trying until you don’t miss. When you miss, you call it a “lose”, but when you end up where you planned, it is a “WIN”. So it is just words, and everything is trying until you win, because until you abandon the attempt you didn’t lose.
My English is bad, sorry. “I did my best” ;)