The Power of Agency
You are not a Bayesian homunculus whose reasoning is ‘corrupted’ by cognitive biases.
You just are cognitive biases.
You just are attribute substitution heuristics, evolved intuitions, and unconscious learning. These make up the ‘elephant’ of your mind, and atop them rides a tiny ‘deliberative thinking’ module that only rarely exerts itself, and almost never according to normatively correct reasoning.
You do not have the robust character you think you have, but instead are blown about by the winds of circumstance.
You do not have much cognitive access to your motivations. You are not Aristotle’s ‘rational animal.’ You are Gazzaniga’s rationalizing animal. Most of the time, your unconscious makes a decision, and then you become consciously aware of an intention to act, and then your brain invents a rationalization for the motivations behind your actions.
If an ‘agent’ is something that makes choices so as to maximize the fulfillment of explicit desires, given explicit beliefs, then few humans are very ‘agenty’ at all. You may be agenty when you guide a piece of chocolate into your mouth, but you are not very agenty when you navigate the world on a broader scale. On the scale of days or weeks, your actions result from a kludge of evolved mechanisms that are often function-specific and maladapted to your current environment. You are an adaptation-executor, not a fitness-maximizer.
Agency is rare but powerful. Homo economicus is a myth, but imagine what one of them could do if such a thing existed: a real agent with the power to reliably do things it believed would fulfill its desires. It could change its diet, work out each morning, and maximize its health and physical attractiveness. It could learn and practice body language, fashion, salesmanship, seduction, the laws of money, and domain-specific skills and win in every sphere of life without constant defeat by human hangups. It could learn networking and influence and persuasion and have large-scale effects on societies, cultures, and nations.
Even a little bit of agenty-ness will have some lasting historical impact. Think of Benjamin Franklin, Teddy Roosevelt, Bill Clinton, or Tim Ferris. Imagine what you could do if you were just a bit more agenty. That’s what training in instrumental rationality is all about: transcending your kludginess to attain a bit more agenty-ness.
And, imagine what an agent could do without the limits of human hardware or software. Now that would really be something.
(This post was inspired by some conversations with Michael Vassar.)
I radically distrust the message of this short piece. It’s a positive affirmation for “rationalists” of the contemporary sort who want to use brain science to become super-achievers. The paragraph itemizing the powers of agency especially reads like wishful thinking: just pay a little more attention to small matters like fixity of purpose and actually acting in your own interest, and you’ll get to be famous, rich, and a historical figure! Sorry, that is not nearly ruthless enough. You also need to be willing to lie, cheat, steal, kill, use people, betray them. (Wishes can come true, but they usually exact a price.) It also helps to be chronically unhappy, if it will serve to motivate your extreme and unrelenting efforts. And finally, most forms of achievement do require domain-specific expertise; you don’t get to the top just by looking pretty and statusful.
The messy, inconsistent, and equivocating aspects of the mind can also be adaptive. They can save you from fanaticism, lack of perspective, and self-deception. How often do situations really permit a calculation of expected utility? All these rationalist techniques themselves are fuel for rationalization: I’m employing all the special heuristics and psychological tricks, so I must be doing the right thing. I’ve been so focused lately, my life breakthrough must be just around the corner.
It’s funny that here, the use of reason has become synonymous with “winning” and the successful achievement of plans, when historically, the use of reason was thought to promote detachment from life and a moderation of emotional extremes, especially in the face of failure.
How could you reliably know these things, and how could you make intentional use of that knowledge, if not with agentful rationality?
You can’t. I won’t deny the appeal of Luke’s writing; it reminds me of Gurdjieff, telling everyone to wake up. But I believe real success is obtained by Homo machiavelliensis, not Homo economicus.
This is reminding me of Steve Pavlina’s material about light-workers and dark-workers. He claims that working to make the world a better place for everyone can work, and will eventually lead you to realizing that you need to take care of yourself, and that working to make your life better exclusive of concern for others can work and will eventually convince you of the benefits of cooperation, but that slopping around without being clear about who you’re benefiting won’t work as well as either of those.
How can you tell the ratio between Homo machiavelliensis and Homo economicus, considering that HM is strongly motivated to conceal what they’re doing, and HM and HE are probably both underestimating the amount of luck required for their success?
fMRI? Also, some HE would be failed HM. The model I’m developing is that in any field of endeavor, there are one or two HMs at the top, and then an order-of-magnitude more HE also-rans. The intuitive distinction: HE plays by the rules, HM doesn’t; victorious HM sets the rules to its advantage, HE submits and gets the left-over payoffs it can accrue by working within a system built by and for HMs.
My point was that both the “honesty is the best policy” and the “never give a sucker an even break” crews are guessing because the information isn’t out there.
My guess is that different systems reward different amounts of cheating, and aside from luck, one of the factors contributing to success may be a finely tuned sense of when to cheat and when not.
Yeah, and the people who have the finest-tuned sense of when to cheat are the people who spent the most effort on tuning it!
I suspect some degree of sarcasm, but that’s actually an interesting topic. After all, a successful cheater can’t afford to get caught very much in the process of learning how much to cheat.
Love the expression. :)
Interesting. Personally I read it as a kind of “get back to Earth” message. “Stop pretending you’re basically a rational thinker and only need to correct some biases to truly achieve that. You’re this horrible jury-rig of biases and ancient heuristics, and yes while steps towards rationality can make you perform much better, you’re still fundamentally and irreparably broken. Deal with it.”
But re-reading it, your interpretation is probably closer to the mark.
Agency is still pretty absent there too. As it happens, I have something of an essay on just that topic: http://www.gwern.net/on-really-trying#on-the-absence-of-true-fanatics
This is false.
Yes. And domain-specific expertise is something that can be learned and practiced, by applying agency to one’s life. I’ll add it to the list.
If we are talking about how to become rich, famous, and a historically significant person, I suspect that neither of us speaks with real authority. And of course, just being evil is not by itself a guaranteed path to the top! But I’m sure it helps to clear the way.
Sure. I’m only disagreeing with what you said in your original comment.
I would say ‘overstated’. I assert that most people who became famous, rich and a historical figure used those tactics. More so the ‘use people’, ‘betray them’ and ‘lie’ than the more banal ‘evils’. You don’t even get to have a solid reputation for being nice and ethical without using dubiously ethical tactics to enforce the desired reputation.
Personally, I find that being nice and ethical is the best way to get a reputation for being nice and ethical, though your mileage may vary.
I don’t have a personal statement to make about my strategy for gaining a reputation for niceness. Partly because that is a reputation I would prefer to avoid.
I do make the general, objective level claim that actually being nice and ethical is not the most effective way to gain that reputation. It is a good default and for many, particularly those who are not very good at well calibrated hypocrisy and deception, it is the best they could do without putting in a lot of effort. But it should be obvious that the task of creating an appearance of a thing is different to that of actually doing a thing.
I don’t think anyone’s arguing that “reason” is synonymous with winning. There are a lot of people, however, arguing that “rationality” is systematized winning. I’m not particularly interested in detaching from life and moderating my emotional response to failure. I have important goals that I want to achieve, and failing is not an acceptable option to me. So I study rationality. Honestly, EY said it best:
Can you go into more detail about how you believe these particular people behaved more agenty than normal?
In case anybody asks how I was able to research and write two posts from scratch today:
It’s largely because I’ve had Ray Lynch’s ‘The Oh of Pleasure’ on continuous repeat ever since 7am, without a break.
(I’m so not kidding. Ask Jasen Murray.)
If this actually works reliably, I think it is much more important than anything in either of the posts you used it to write—why bury it in a comment?
I don’t know if it’s the song or the placebo effect, but it’s just written my thesis proposal for me.
Congrats!
Well there’s a piece of music easy to date to circa Blade Runner.
Maybe tomorrow I will try the Chariots of Fire theme, see what it does for me. :)
Hmmm. I wonder what else I’ve spent an entire day listening to over and over again while writing. Maybe Music for 18 Musicians, Tarot Sport, Masses, and Third Ear Band.
I just came across Tarot Sport; it’s the most insomnia-inducing trance I’ve ever heard.
I liked that song but then ended up listening to the #2 most popular song on that site instead. It provided me with decidedly less motivation. ;)
I just listened to four seconds of that song and then hit ‘back’ in my browser to write this comment. ‘Ugh’ to that song.
Just listened to it. The first minute or so especially had an effect about as strong as a cup of coffee. A little agitating, but motivating.
How do we know that it’s you writing, and not the music?
(Just kidding, really.)
Edit—please disregard this post
That is strange. I like the song though, thanks for passing it along. Like one of the other commenters, I will be testing out its effects.
Do you think if you listened to the song every day, or 3 days a week, or something, the effect on your productivity or peace of mind would dwindle? If not, do you plan to continue listening to it a disproportionate amount relative to other music?
ETA random comment: Something about it reminds me of the movie Legend.
I don’t believe this is really the cause, but I’m going to listen to it at work tomorrow just in case.
A lot of body language, fashion, salesmanship, seduction, networking, influence, and persuasion are dependent entirely on heuristics and intuition.
In the real world, those who have less access to these traits (people on the autistic spectrum, for example) tend to have a much harder time learning how to accomplish any of the named tasks. They also, for most of those tasks, have a much harder time seeing why one would wish to accomplish them.
Extrapolating to a being that has absolutely no such intuitions or heuristics, one is left with the question: what is it that it actually wishes to do? Perhaps some of the severely autistic really are like this and never learn language, as it never occurs to them that language could be useful, and so they have no desire to learn it.
With no built-in programming to determine what is and is not to be desired, and no built-in programming as to how the world works or does not work, how is one to determine what should be desirable or how to accomplish what is desired? As far as I can determine, an agent without human hardware or software may be left spending its time attempting to figure out how anything works and figuring out what, if anything, it wants to do.
It may not even attempt to figure out anything at all, if curiosity is not rational but a built-in heuristic. Perhaps someone has managed to build a rational AI but neglected to give it built-in desires and/or built-in curiosity, and it did nothing, so it was assumed not to have worked.
Isn’t even the desire to survive a heuristic?
Sure. But are you denying these skills can be vastly improved by applying agency?
You mention severe autistics. I’m not sure how much an extra dose of agency could help a severe autistic. Surely, there are people for whom an extra dose of agency won’t help much. I wasn’t trying to claim that agency would radically improve the capabilities of every single human ever born.
Perhaps you are reacting to the idea that heuristics are universally bad things? But of course I don’t believe that. In fact, the next post in my Intuitions and Philosophy sequence is entitled ‘When Intuitions are Useful.’
This is what I am reacting to, especially when combined with what I previously quoted.
Oh. So… are you suggesting that a software agent can’t learn body language, fashion, seduction, networking, etc.? I’m not sure what you’re saying.
I am saying: without heuristics or intuitions, what is the basis for any desires? If an agent is a software agent without built-in heuristics and intuitions, then what are its desires, what are its needs, and why would it desire to survive, to find out more about the world, to do anything? Where do the axioms that let it think it can modify the world, or conclude anything at all, come from?
Our built-in heuristics and intuitions are what allow us to start building models of the world on which to reason in the first place, and removing any of them demonstrably makes it harder to function in normal society or to act normally. Things that appear reasonable to almost everyone are utter nonsense and seem pointless to those who are missing some of the basic heuristics and intuitions.
If all such heuristics (e.g. no limits of human hardware or software) are taken away then what is left to build on?
I’ll jump in this conversation here, because I was going to respond with something very similar. (I thought about my response, and then was reading through the comments to see if it had already been said.)
I sometimes imagine this, and what I imagine is that without the limits (constraints) of our hardware and software, we wouldn’t have any goals or desires.
Here on Less Wrong, when I assimilated the idea that there is no objective value, I expected I would spiral into a depression in which I realized nothing mattered, since all my goals and desires were finally arbitrary with no currency behind them. But that’s not what happened—I continued to care about my immediate physical comfort, interacting with people, and the well-being of the people I loved. I consider that my built-in biological hardware and software came to the rescue. There is no reason to value the things I do, but they are built into my organism. Since I believe that it was being an organism that saved me (and by this I mean the product of evolution), I do not believe the organism (and her messy goals) can be separated from me.
I feel like this experiment helped me identify which goals are built in and which are abstract and more fully ‘chosen’. For example, I believe I did lose some of my values, I guess the ones that are most cerebral. (I only doubt this because with a spiteful streak and some lingering anger about the nonexistence of objective values, I could be expressing this anger by rejecting values that seem least immediate). I imagine with a heightened ability to edit my own values, I would attenuate them all, especially wherever there were inconsistencies.
These thoughts apply to humans only (that is, me) but I also imagine (entirely baselessly) that any creature without hardware and software constraints would have a tough time valuing anything. For this, I am mainly drawing on intuition I developed that if a species was truly immortal, they would be hard pressed to think of anything to do, or any reason to do it. Maybe, some values of artistry or curiosity could be left over from an evolutionary past.
Depends what kind of agent you have in mind. An advanced type of artificial agent has its goals encoded in a utility function. It desires to survive because surviving helps it achieve utility. Read chapter 2 of AIMA for an intro to artificial agents.
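Here is a minimal sketch in Python of what that looks like (a toy example of my own, not code from AIMA; the actions, outcomes, and utility numbers are invented for illustration). The agent has no separate ‘desire to survive’; it is simply that dying scores badly under its utility function:

```python
# A toy utility-based agent in the spirit of AIMA chapter 2.
# The agent has no built-in 'survival instinct': it picks whichever
# action its world model says maximizes utility, and staying alive
# falls out of that.

def utility(outcome):
    # Hypothetical utility function; the numbers are illustrative only.
    return {"survive_rich": 10, "survive_poor": 3, "dead": -100}[outcome]

def predict(action):
    # Hypothetical world model mapping actions to predicted outcomes.
    return {"invest": "survive_rich",
            "do_nothing": "survive_poor",
            "bet_everything": "dead"}[action]

def choose(actions):
    # Utility maximization: the agent's only 'desire' is this argmax.
    return max(actions, key=lambda a: utility(predict(a)))

print(choose(["invest", "do_nothing", "bet_everything"]))  # -> 'invest'
```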
Precisely: that utility function is a heuristic or intuition. Further, survival can only be desired according to prior knowledge of the environment, so again a heuristic or intuition. It is also dependent on the actions that it is aware it can perform (intuition or heuristic). One can only be an agent when placed in an environment, given some set of desires (heuristic) (and ways to measure accomplishing those desires), and given a basic understanding of what actions are possible (intuition), as well as whatever basic understanding of the environment is needed to be able to reason about it (intuition).
I assume chapter 2 of the 2nd edition is sufficiently close to chapter 2 of the 3rd edition?
I don’t understand you. We must be using the terms ‘heuristic’ and ‘intuition’ to mean different things.
A pre-programmed set of assumptions or desires that are not chosen rationally by the agent in question.
edit: perhaps you should look up 37 ways that words can be wrong
Also, you appear to be familiar with some philosophy so one could say they are A Priori models and desires in the sense of Plato or Kant.
If this is where you’re going, then I don’t understand the connection to my original post.
Which sentence(s) of my original post do you disagree with, and why?
I have already gone over this.
Such an agent may not have the limits of human hardware or software, but such an agent does require a similar set of restrictions and (from the agent’s point of view) irrational assumptions and desires, or it is my opinion that the agent will not do anything.
The human hang-ups are what allow us to practice body language, fashion, etc., and what give us the desire to do so. If we didn’t have such hang-ups then, from experience, understanding such things would be much harder, practicing them would be harder, and desiring them would require convincing. It is easier to win if one is endowed with the intuitions and heuristics required to make practicing such things both desirable and natural.
There is no accounting for preferences (or desires), meaning such things are not usually rationally chosen, and when they are, there is still a base of non-rational assumptions. Homo economicus is just as dependent on intuition and heuristics as anyone else. The only place it differs, at least as classically understood, is in its ability to access near-perfect information and to calculate exactly its preferences and probabilities.
Edit: Also, this is said as a bad thing when it is a necessary thing.
Desires/goals/utility functions are non-rational, but I don’t know what you mean by saying that an artificial agent needs restrictions and assumptions in order to do something. Are you just saying that it will need heuristics rather than (say) AIXI in order to be computationally tractable? If so, I agree. But that doesn’t mean it needs to operate under anything like the limits of human hardware and software, which is all I claimed.
Sure, but I think a superintelligence could figure it out, the same way a superintelligence could figure out quantum computing or self-replicating probes.
Agreed. This is the Humean theory of motivation, which I agree with. I don’t see how anything I said disagrees with the Humean theory of motivation.
I didn’t say it as a bad thing, but a correcting thing. People think they have more access to their motivations than they really do. Also, it’s not a necessary thing that we don’t have much cognitive access to our motivations. In fact, as neuroscience progresses, I expect us to gain much more access to our motivations.
JohnH, I kept asking what you meant because the claims I interpreted from your posts were so obviously false that I kept assuming I was interpreting you incorrectly. I’m still mostly assuming that, actually.
You need to assume inductive priors. Otherwise you’re pretty much screwed.
wedrifid has explained the restriction part well.
Again, the superintelligence would need to have some reasons to desire to figure out any such thing and to think that it can figure out such things.
Even if this is true any motivation to modify our motivations would itself be based on our motivations.
I do not see how anything I said is obviously false. Please explain this.
Sure. Like, its utility function. How does anything you’re saying contradict what I claimed in my original post?
Sorry, I still haven’t gotten any value out of this thread. We seem to be talking past each other. I must turn my attention to more productive tasks now...
Hang on, you are going to claim that my comments are obviously false then argue over definitions and when definitions are agreed upon walk away without stating what is obviously false?
I seriously feel that I’ve gotten the runaround from you rather than, at any point, a straight answer. My only possible conclusions are that you are being evasive or that you have inconsistent beliefs about the subject (or both).
You seem to have used the words ‘heuristic’ and ‘intuition’ to refer to terminal values (e.g. a utility function) and perhaps Occam priors, as opposed to the usually understood meaning “a computationally tractable approximation to the correct decision-making process (full Bayesian updating or whatever)”. It looks like you and lukeprog actually agree on everything that is relevant, but without generating any feeling of agreement. As I see it, you said something like “but such an agent won’t do anything without an Occam prior and terminal values”, to which lukeprog responded “but clearly anything you can do with an approximation you can do with full Bayesian updating and decision theory”.
Basically, I suggest you Taboo “intuition” and “heuristic” (and/or read over your own posts with “computationally tractable approximation” substituted for “intuition” and “heuristic”, to see what lukeprog thinks is ‘obviously false’).
Thank you for that, I will check over it.
Luke isn’t arguing over definitions as far as I could see, he was checking to see if there was a possibility of communication.
A heuristic is a quick and dirty way of getting an approximation to what you want, when getting a more accurate estimate would not be worth the extra effort/energy/whatever it would cost. As I see it, the confusion here arises from the fact that you believe this has something to do with goals and utility functions. It doesn’t. These can be arbitrary for all we care. But any intelligence, no matter its goals or utility function, will want to achieve things; after all, that’s what it means to have goals. If it has sufficient computational power handy it’ll use an accurate estimator; if not, a heuristic.
Heuristics have nothing to do with goals: adaptations, not ends.
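To make the distinction concrete, here is a minimal toy sketch (the function names, data, and budget rule are my own invention, not anything from the thread): which estimator gets used depends only on the available computation, not on what the agent’s goals are.

```python
import random

def exact_mean(xs):
    # The accurate estimator: examine every element.
    return sum(xs) / len(xs)

def heuristic_mean(xs, sample_size=100):
    # The heuristic: a quick and dirty random sample, cheaper but approximate.
    sample = random.sample(xs, min(sample_size, len(xs)))
    return sum(sample) / len(sample)

def estimate(xs, compute_budget):
    # The same tradeoff applies whatever the goal is: pay for accuracy
    # when the budget allows, fall back to the approximation when it doesn't.
    if compute_budget >= len(xs):
        return exact_mean(xs)
    return heuristic_mean(xs)

data = list(range(1_000_000))
print(estimate(data, compute_budget=2_000_000))  # enough budget: exact answer
print(estimate(data, compute_budget=5_000))      # constrained: heuristic guess
```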
Yeah, you probably do want to let the elephant be in charge of fighting or mating with other elephants, once the rider has decided it’s a good idea to do so.
Intuitions are usually defined as being inexplicable. A priori claims are usually explicable in terms of axioms, although axioms may be chosen for their intuitive appeal.
precisely.
Am I wrong in taking this to be a one-liner critique of all virtue ethical theories?
I’ve been thinking about this with regards to Less Wrong culture. I had pictured your “deliberative thinking” module as more of an “excuse generator”—the rest of your mind would make its decisions, and then the excuse generator comes up with an explanation for them.
The excuse generator is primarily social—it will build excuses which are appropriate to the culture it is in. So in a rationalist culture, it will come up with rationalizing excuses. It can be exposed to a lot of memes, parrot them back and reason using them without actually affecting your behavior in any way at all.
Just sometimes though, the excuse generator will fail and send a signal back to the rest of the mind that it really needs to change something, else it will face social consequences.
The thing is, I don’t feel that this stuff is new. But try and point it out to anyone, and they will generate excuses as to why it doesn’t matter, or why everyone lacks the power of agency except them, or that it’s an interesting question they’ll get around to looking at sometime.
So currently I’m a bit stuck.
...act as predicted by the model.
I know this is a year or two late, but: I’ve noticed this and find it incredibly frustrating. Turning introspection (yes, I know) on my own internally-stated motivations more often than not reveals them to be either excuses or just plain bullshit. The most frequent failure mode is finding that I did [the thing], not because it was good, but because I wanted to be seen as the sort of person who would do it. Try though I might, it seems incredibly difficult to get my brain to not output Frankfurtian Bullshit.
I sort-of-intend to write a post about it one of these days.
I loved this, but I’m not here to contribute bland praise. I’m here to point out somebody who does, in fact, behave as an agent as defined by the italicized statement “reliably do things it believed would fulfill its desires”, which continues with “It could change its diet, work out each morning, and maximize its health and physical attractiveness.” I couldn’t help but think of Scott H Young, a blogger I’ve been following for months. I really look up to that guy. He is effectively a paragon of the model that you can shape your life to live it as you like. (I’m sure he would never say that though.) He actually referenced a Less Wrong article recently, and it’s not the first time he’s done it, which significantly increased my opinion of him. His current “thing” is trying to master the equivalent of a rigorous CS curriculum (using MIT’s requirements) in 12 months. Only those in the Less Wrong community stand a good chance of not thinking that’s pretty audacious.
http://lesswrong.com/user/ScottHYoung
Thanks, I should’ve known
Coming back to this post, I feel like it’s selling a dream that promises too much. I’ve come to think of such dreams as Marlboro Country ads. For every person who gets inspired to change, ten others will be slightly harmed because it’s another standard they can’t achieve, even if they buy what you’re selling. Figuring out more realistic promises would do us all a lot of good.
Excellent clarion call to raise our expectation of what agency is and can do in our lives, as well as to have sensible expectations of our and others’ humble default states. Well done.
One way of thinking about this:
There is behavior, which is anything an animal with a nervous system does with its voluntary musculature. Everything you do all day is behavior.
Then there are choices, which are behaviors you take because you think they will bring about an outcome you desire. (Forget about utility functions—I’m not sure all human desires can be described by one twice-differentiable convex function. Just think about actions taken to fulfill desires or values.) Not all behaviors are choices. In fact, it’s easy to go through a day without making any choices at all, mostly by following habits or instinctive reactions.
In classical economics, all behaviors are modeled as choices. That’s not true of people in practice, but possibly some people choose a higher percentage of their behaviors than other people do. Maybe it’s possible to train yourself to make more of your behaviors into choices. (In fact, just learning Econ 101 made me more inclined to consciously choose my behaviors.)
There is a reason for this. Making choices constantly is exhausting, especially if you consider all of the possible behaviours. For me, the way to go is to choose your habits. For example: I choose not to spend money on eating out. This (a) saves me money, and (b) saves me from the extra calories in fast food. When pictures of food on a store window tempt me, I only have to appeal to my habit of not eating out. It’s barely conscious now. If I forget to pack enough food from home and I find myself hungry, and the ads unusually tempting, I make a choice to reinforce my habit by not buying food, although I am hungry and there is a cost to myself. The same goes for exercising: I maintain a habit of swimming for an hour 3 to 5 times a week, so the question “should I swim after work?” is no longer a willpower-draining conscious decision but an automatic response.
If I were willing to put in the initial energy of choosing to start a new arbitrary habit, I’m pretty sure I could. As my mother has pointed out, in the past I’ve been able to accomplish pretty much everything I set my mind on (with the exception of becoming the youngest person to swim across Lake Ontario and getting into the military, but both of those plans failed for reasons pretty much outside my control.)
Part of modelling everything as choices is that, for economists’ purposes, it doesn’t matter whether a choice happens to be conscious or not. That is an arbitrary distinction that matters more to us for the purposes of personal development, and so that we can flatter each other’s conscious selves by pretending they are especially important.
I want to upvote this again.
done for you
It might be simply structural that the LessWrong community tends to be about armchair philosophy, science, and math. If there are people who have read through Less Wrong, absorbed its worldview, and gone out to “just do something”, then they probably aren’t spending their time bragging about it here. If it looks like no one here is doing any useful work, that could really just be sampling bias.
Even still, I expect that most posters here are more interested to read, learn, and chat than to thoroughly change who they are and what they do. Reading, learning, and chatting is fun! Thorough self-modification is scary.
Thorough and rapid self-modification, on the basis of things you’ve read on a website rather than things you’ve seen tested and proven in combination, is downright dangerous. Try things, but try them gradually.
And now, refutation!
To, um, what, exactly? I think the question whose solution you’re describing is “What ought one do?” Of these, you say:
That depends largely on your moral intuitions. I honestly think of all humans as people. I am always taken aback a little when I see evidence that lots of other folks don’t. You’d think I’d stop being surprised, but it often catches me when I’m not expecting it. I’d suggest that my intuitions about my morals when I’m planning things are actually pretty good.
That said, the salient intuitions in an emotionally-charged situation certainly are bad at planning and optimization. And so, if you imagine yourself executing your plan, I would honestly expect it to feel oddly amoral. It won’t feel wrong, necessarily, but it might not feel relevant to morality at all.
This is … sort of true, depending on what you mean. You might need to learn more, to be able to form a more efficient or more coherent plan. You might need to sleep right now. But, yes, you can prepare to prepare to prepare to change the world right away.
Staying aligned with a community of family and friends is not an arbitrary limitation. Humans are social beings. I myself am strongly introverted, but I also know that my overall mood is affected strongly by my emotional security in my social status. I can reflect on this fact, and I can mitigate its negative consequences, but it would be madness to just ignore it. In my case—and, I presume, in the case of anyone else who worries about being aligned with their family and friends—it’s terrifying to imagine undermining many of those relationships.
You need people that you can trust for deep, personal conversations; and you need people who would support you if your life went suddenly wrong. You may not need these things as insurance, you may not need to use friends and family in this way, but you certainly need them for your own psychological well-being. Being depressed makes one significantly less effective at achieving one’s goals, and we monkeys are depressed without close ties to other monkeys.
On the other hand, harmless-seeming deviations probably won’t undermine those relationships; they’re far less likely to ruin relationships than they seem. Rather, they make you a more interesting person to talk to. Still, it is a terrible idea to carelessly antagonize your closest people.
No! If we’re defining a “true rationalist” as some mythical entity, then probably so. If we want to make “true rationalists” out of humans, no! If you completely disregard common social graces like the outward appearance of humility, you will have real trouble coordinating world-changing efforts. If you disregard empathy for, say, people you’re talking to, you will seem rather more like a monster than a trustworthy leader. And if you ever think you’re unaffected by the absurdity heuristic, you’re almost certainly wrong.
People are not perfect agents, optimizing their goals. People are made out of meat. We can change what we do, reflect on what we think, and learn better how to use the brains we’ve got. But the vast majority of what goes on in your head is not, not, not under your control.
Which brings me to the really horrifying undercurrent of your post, which is why I stayed up an extra hour to write this comment. I mean, you can sit down and make plans for what you’ll learn, what you’ll do, and how you’ll save billions of lives, and that’s pretty awesome. I heartily approve! You can even figure out what you need to learn to decide the best courses of action, set plans to learn that, and get started immediately. Great!
But if you do all this without considering seemingly unimportant details, like having fun with friends and occasionally relaxing, then you will fail. Not only will you fail, but you will fail spectacularly. You will overstress yourself, burn out, and probably ruin your motivation to change the world. Don’t go be a “rationalist” martyr, it won’t work very well.
So, if you’re going to decompartmentalize your global aspirations and your local life, then keep in mind that only you are likely to look out for your own well-being. That well-being has a strong effect on how effective you can be. So much so that attempting more than about 4 hours per day of real, closely-focused mental effort will probably give you not just diminishing returns, but worse efficiency per day. That said, almost nobody puts in 4 hours a day of intense focus.
So, yes, billions are miserable, people die needlessly, and the world is mad. I am still going out tomorrow night and playing board games with friends, and I do not feel guilty about this.
The real question is: how big of an impact can this stuff make, anyway? And how much are people able to actually implement it into their lives?
Are there any good sources of data on that? Beyond PUA, The Game, etc?
Besides, in theory we want to discuss non-Dark Arts topics...
There are many topics that are relevant here that some have labelled ‘Dark Arts’.
It’s Tim Ferriss.
Either way, the guy’s a moron. He’s basically a much better packaged snake oil salesman.
He’s a very effective snake oil salesman.
People don’t change their sense of agency because they read a blog post.
“In alien hand syndrome, the afflicted individual’s limb will produce meaningful behaviors without the intention of the subject. The affected limb effectively demonstrates ‘a will of its own.’ The sense of agency does not emerge in conjunction with the overt appearance of the purposeful act even though the sense of ownership in relationship to the body part is maintained. This phenomenon corresponds with an impairment in the premotor mechanism manifested temporally by the appearance of the readiness potential (see section on the Neuroscience of Free Will above) recordable on the scalp several hundred milliseconds before the overt appearance of a spontaneous willed movement. Using functional magnetic resonance imaging with specialized multivariate analyses to study the temporal dimension in the activation of the cortical network associated with voluntary movement in human subjects, an anterior-to-posterior sequential activation process beginning in the supplementary motor area on the medial surface of the frontal lobe and progressing to the primary motor cortex and then to parietal cortex has been observed.[167] The sense of agency thus appears to normally emerge in conjunction with this orderly sequential network activation incorporating premotor association cortices together with primary motor cortex. In particular, the supplementary motor complex on the medial surface of the frontal lobe appears to activate prior to primary motor cortex presumably in associated with a preparatory pre-movement process. In a recent study using functional magnetic resonance imaging, alien movements were characterized by a relatively isolated activation of the primary motor cortex contralateral to the alien hand, while voluntary movements of the same body part included the concomitant activation of motor association cortex associated with the premotor process.[168] The clinical definition requires “feeling that one limb is foreign or has a will of its own, together with observable involuntary motor activity” (emphasis in original).[169] This syndrome is often a result of damage to the corpus callosum, either when it is severed to treat intractable epilepsy or due to a stroke. The standard neurological explanation is that the felt will reported by the speaking left hemisphere does not correspond with the actions performed by the non-speaking right hemisphere, thus suggesting that the two hemispheres may have independent senses of will.[170][171]
Similarly, one of the most important (“first rank”) diagnostic symptoms of schizophrenia is the delusion of being controlled by an external force.[172] People with schizophrenia will sometimes report that, although they are acting in the world, they did not initiate, or will, the particular actions they performed. This is sometimes likened to being a robot controlled by someone else. Although the neural mechanisms of schizophrenia are not yet clear, one influential hypothesis is that there is a breakdown in brain systems that compare motor commands with the feedback received from the body (known as proprioception), leading to attendant hallucinations and delusions of control.[173]”