Where to Intervene in a Human?
The “What is Rationality?” page on the new CFAR website contains an illuminating story about Intel:
Semiconductor giant Intel was originally a memory chip manufacturer. But by 1985, memory chips had been losing them money for years. Co-founders Andy Grove (CEO) and Gordon Moore met to discuss the problem. The mood was grim. At one point, Andy turned to Gordon and asked, “If we get kicked out and the board brings in a new CEO, what do you think he would do?”
Gordon replied without hesitation. “He would get us out of the memory business.”
“Okay,” said Andy. “Then why shouldn’t you and I walk out the door, come back, and do it ourselves?”
That year, Andy and Gordon shifted the focus of the company to microprocessors, and created one of the greatest success stories in American business.
I presume Andy and Gordon had considered intervening at many different levels of action: in middle management, in projects, in products, in details, etc. They had probably implemented some of these plans, too. But the problem with Intel — it was in the wrong market! — was so deep that the place to intervene was at a very low level, the foundations of the entire company. It’s possible that in this situation, no change they could have made at higher levels of action would have made that big of a difference compared to changing the company’s market and mission.
In 1997, system analyst Donella Meadows wrote Places to Intervene in a System, in which she outlined twelve leverage points at which one could intervene in a system. Different levels of action, she claimed, would have effects of different magnitudes.
This got me thinking about levels of action and self-improvement. “I want to improve myself: where should I intervene in my own system next?”
My bet is that if the next greatest leverage point you can push on is something like neurofeedback, then you’re pretty damn self-optimized already.
In fact, I suspect almost nobody is that self-optimized. We do things like neurofeedback because (1) we don’t think enough about choosing the highest-leverage self-interventions, (2) in any case, we don’t know how to figure out which interventions would be higher leverage for ourselves, (3) even if there are higher-leverage interventions to be had, we might not successfully carry them through, whereas neurofeedback or whatever happens to be fun and engaging for us, and (4) sometimes, you gotta stop analyzing your situation and just do some stuff that looks like it might help.
Anyway, how can one figure out what the next highest-leverage self-interventions are for oneself? Maybe I just haven’t yet found the right keywords, but I don’t think there’s been much research on this topic.
Intuitively, it seems like hacking one’s motivational system is among the highest-leverage interventions one can make, because high motivation allows one to carry through with lots of other interventions, while insufficient motivation blocks follow-through on almost everything else.
But if you’ve got a crippling emotional or physical condition, I suppose you’ve got to take care of that first — at least well enough to embark on the project of hacking your motivation system.
Or, if you’re in a crippling environment like North Korea or Nigeria or Detroit, then perhaps the highest level intervention for you is to get up and move someplace better. Only then will you be able to fix your emotions or hack your motivational system or whatever.
Maybe there’s something of a system to this that hasn’t been discovered, or maybe there’s no system at all because humans are too complex. I’m still in brainstorm mode on this topic.
What do you think are some generally highest-level self-improvement interventions that more people should be tackling before things like neurofeedback?
What algorithm could be used for discovering the next best intervention one can make to improve oneself?
Has there been any research on this issue?
The highest-level hack I’ve found useful is to make a habit of noticing and recording the details of any part of my life that gives me trouble. It’s amazing how quickly patterns start to jump out when you’ve assembled actual data about something that’s vaguely frustrated you for a while.
I’ve had the same experience. Connected with this, I’ve found it very useful to periodically process the same notes into a polite summary and communicate them with people who are interested and working on the same or related tasks. It does all kinds of good stuff, like helping me develop a more realistic “outside perspective” of myself, allowing me to function as a role model for self-aware self-improvement, engaging commitment and consistency in useful directions, and so on.
Is it possible for you to give an example of how this works in practice? I’m curious what type of things you would note down.
It sounds like a useful idea worth trying out, but I’m having trouble seeing how I would start using it.
Not the prettiest example, but I had a long-running acne problem that I could never seem to get a handle on. So a few years ago, I started writing down, every morning, whether I had new zits that day, what I was using on my face, and any other factors (like diet) I thought might be relevant. It suddenly became quite easy to zero in on the right solution (a low concentration benzoyl peroxide facewash), and I’ve been happy with the results ever since.
A second example is that I started a (rather involved and silly) spreadsheet tracking my time working one semester. It was far too complicated a system in retrospect, but the mere fact of observing my time-wasting led me to use my time moderately better than before.
And a third thing is keeping explicit track of what you spend, so that you notice what patterns are costing you money and can ask whether they’re worth it. (Or, in the other direction, I learned that I shouldn’t be so worried about marginal spending on clothes, since that amount is dwarfed by rent, food, etc. So I buy new clothes a bit more often.) There are automatic tools for budgeting (like Mint.com) if you trust them.
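The kind of tally that makes these patterns jump out is simple to sketch. This is a hypothetical example (not how Mint.com works); the categories and amounts are made up:

```python
from collections import defaultdict

def spending_by_category(transactions):
    """Sum (category, amount) pairs and sort largest-first,
    so the categories that dominate the budget are listed on top."""
    totals = defaultdict(float)
    for category, amount in transactions:
        totals[category] += amount
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

# Example month: rent and food dwarf marginal clothes spending.
month = [("rent", 1200.0), ("food", 450.0), ("clothes", 60.0), ("food", 80.0)]
print(spending_by_category(month))
# → {'rent': 1200.0, 'food': 530.0, 'clothes': 60.0}
```

Even a tally this crude answers the “is this pattern worth it?” question, because it shows each category’s size relative to the others rather than in isolation.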
Seth Roberts found a cure for his acne problem by keeping track of how the number of pimples on his face fluctuated over time.
This is the “obvious” ideal which several of my ideas for hacks have been approximating. Adopting. Thanks.
Could you flesh out some details here? What did you record, how did you fit recording it into your daily routine, how did you analyze it? Thanks!
Is this enough detail?
The Intel story’s source should be cited. It’s in Only The Paranoid Survive by Grove, Chapter 5:
Currently, the website follows the book closely (“without hesitation”) but slightly mutates it (“brought in” ⇒ “brings in”, “memories” ⇒ “the memory business”).
Cited.
Wouldn’t this be better described as a very high level, not a very low level? You seem to adopt this mapping later on:
The concrete metaphor switches within the article.
High level can be meant in the sense of levels of action, where higher levels change more things.
Low-level can also mean that it underlies many things, so that deep changes change more things.
It would probably be helpful to adopt a consistent metaphor though.
Ask whether you’re hitting ‘the basics of what normal people think makes people healthy.’ Sebastian Marshall has a good list:
Intuition pump: you have a low-powered genie who can grant wishes to the level of contemporary technology available on the open market. What do you ask for before you get to “man, I wish I could consciously regulate broad measures of neurological activity inside my brain”?
I have just come back from a surprisingly disappointing OKCupid date. This response may be heavily jaded.
I have spent some time throwing around this idea with other OKCupid users. There is broad consensus that attraction is largely context-based, and in order for a matching algorithm to stand a chance of fostering attraction between two people, it would have to introduce them at a point when they’d be receptive to each other.
A necessary component of this would be keeping a running estimate of every user’s self-esteem, adjusting it for things like ignored messages and fecundity, and occasionally asking questions like “how long has it been since you last had sex?”, “do you weigh more or less than you did six months ago?”, and “has your mother complained about not having any grandkids recently?”
Who would answer those in a way indicating probable low self esteem? That’d be crazy!
There are a few answers to this:
OKCupid does actually ask quite a lot of personal questions, which people do answer. A few years ago the answers were kept private, but now users have the option to make them public, and there exists a certain amount of pressure to do so. I imagine this change results in less honest / accurate answers, but you would still be surprised what people admit to.
The service wouldn’t have to tell you it was keeping track of your self-esteem over time, and matching you with concordant suitors at points when you’d both be most vulnerable to each other’s charms. It would just ask you questions, like a curious but candid friend.
The questions I proposed above were gauche semi-serious examples. There are probably a number of more subtle questions that would correlate strongly with self-esteem without setting off alarm bells in the people that answer them.
Part of the reason for me talking about it is how unpalatable and creepy the idea is, and how a lot of the factors surrounding people being attracted to each other are not available to dating website service providers without a lot of effort they’re probably not prepared to invest. There are probably some areas they can capitalise upon, however.
This isn’t something that requires alarm bells. This is a dating website. Full signalling and screening mode is activated as a matter of course. It is extremely unlikely that I could benefit from giving the system evidence that I have low self esteem so I am not going to do so unless all else is compellingly not equal. I suppose this also requires being able to judge what questions have what self-esteem connotations but that isn’t too hard.
It occurs to me that I play OkCupid as a min-maxing munchkin. (I recommend this. It seems to work!)
I don’t find it especially creepy. Sounds useful. I want the website to take whatever information I give it and connect me with people in the most effective way possible. Anything I don’t want it to know I will not tell it (I will lie to it if necessary).
I would bet a sizeable sum of money that most users do not approach OKCupid in the same way you or I do, consciously or otherwise.
That’s the sort of thing this algorithm is supposed to flag up.
Oh sweet Jesus there are more than five pages...
Michael Keenan pointed out that Scott Adams recommends that maximizing one’s energy should be the priority. That sounds pretty plausible.
Then why don’t you spend more time on finding tactics to increase your energy level? The eight you’ve listed seem pretty good, but surely they’re just the tip of the iceberg.
That is, in fact, my new priority. :)
Fantastic. Me too!
Any luck so far?
Doing hacker exercises every morning
Taking a cold shower every morning
Putting on pants
Lying flat on my back and closing my eyes until I consciously process all the things that are nagging at me and begin to feel more focused
Asking someone to coach me through getting started on something
Telling myself that doing something I don’t want to do will make me stronger
Squeezing a hand grip exerciser for as long as I can (inspired by Muraven 2010; mixed results with this one)
You?
My interventions for energy are less creative: drink water, do jumping jacks, take drugs, etc.
we miss you @aaronsw
A bit off topic, does CFAR accept bitcoin?
Not yet.
In my experience, one doesn’t notice things that are wrong until there is something to contrast them to. For example, you might not even notice that you need new glasses until seeing the world through better ones.
So a first step might just be radical change. Be mindful of the adjustments you’re making when changing food, location, or employment. Flail about a bit and do informal self-experimentation along as many dimensions as possible. This should help highlight location / emotional / physical conditions and suchlike that are getting in the way.
I realized I needed glasses when I was hiking with a friend. She took off her glasses to clean them, and I happened to look through them. It was a rather instant reaction of “oh wow, I need glasses!”
I got lucky in that she has an almost identical prescription to me, obviously :)
(5) Neurofeedback is fun. I certainly wouldn’t do it because I thought it was the single most effective thing I could possibly do that second. But I like doing fun things sometimes.
What’s the argument that neurofeedback has relatively low utility?
Perhaps it’s the expense? I looked into it very briefly, and apparently professional neurofeedback costs thousands of dollars!
I don’t think it’s setting a good example for the CFAR to use an unreliable (self-serving, given from hindsight) anecdote to make a point. The source listed for that story is an autobiography by one of the people in it.
If the truth of the events doesn’t matter, why not use a more accessible urban legend than one that requires knowledge of microprocessors vs memory chips and the timeline of Intel’s relative success?
These seem to go in increasing order of “that really needs to be made more specific before calling it a ‘crippling environment’.”
Nothing new, but: make thorough predictions about things you think you understand, and follow up on them. Being very fastidious about the concreteness of my predictions prevents me from (mis)reinterpreting vague predictions when I get the results.
Trying a bit of this, a bit of that, and comparing results? I doubt it can get any more precise, because the interventions on different levels can be, well, different.
For small changes I would recommend trying each strategy for one week (to filter out work-day cycles and other noise), and having a set of similar tasks randomly assigned to those weeks; or one repetitive task. But some level of change would probably disrupt such a setup. As an example, if my task is to “motivate myself to clean my room” and the intervention is “move to a different environment”, then of course, when I move to a new room, cleaning it is a different task than cleaning my old room, so it is not completely fair to compare my efficiency in those tasks.
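The week-level experiment described here is easy to set up mechanically. A minimal sketch, with made-up strategy names and assuming you record one outcome score per week (whatever metric you choose):

```python
import random
from statistics import mean

def assign_weeks(strategies, n_weeks, seed=None):
    """Give each strategy an (almost) equal number of weeks, in random order,
    so week-to-week noise is spread across strategies instead of confounding one."""
    schedule = [strategies[i % len(strategies)] for i in range(n_weeks)]
    random.Random(seed).shuffle(schedule)
    return schedule

def compare(log):
    """log: list of (strategy, weekly_score) pairs -> mean score per strategy."""
    by_strategy = {}
    for strategy, score in log:
        by_strategy.setdefault(strategy, []).append(score)
    return {s: mean(scores) for s, scores in by_strategy.items()}

# Hypothetical usage: 12 weeks split across three strategies.
schedule = assign_weeks(["pomodoro", "coach", "cold-shower"], n_weeks=12, seed=42)
# After 12 weeks, feed compare() a log like [("pomodoro", 7.0), ("coach", 5.5), ...]
```

With only four weeks per strategy the comparison is noisy, of course; this just keeps the assignment honest rather than letting you pick the easy weeks for your favorite strategy.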
Could the right level be discovered gradually, instead of having to guess it correctly the first time? Such as: start with some low-level improvement, and when you find that something is stopping you, analyze it and take a meta-action. So instead of starting at the right level (and risking going too meta), we could instead start at the right place (where the outcome is measurable) and gradually find the necessary level of change.
But even this cannot be done exactly, because at a sufficiently high level I may choose a different outcome. For example, I work at a paperclip factory, and my initial goal is to make more and better paperclips. First step: I try doing overtime, but then I find I am too tired to continue this way. Second step: I get regular sleep, exercise and eat healthy food. Third step: I attend paperclip-making lessons. So far my progress is measurable. Fourth step: I realize I actually don’t care about paperclips, I just do it to make money; so I change jobs to something that pays better. Oops, my first metric (paperclips) just broke; I need to replace it with money. Fifth step: After having enough money I realize that more money does not bring me much more happiness, so I would prefer having more free time while making the same amount of money; or maybe less money but also less expense. Oops, my second metric broke too, and no replacement is precise enough… I could try some psychological questionnaires for measuring happiness, but those seem too easy to cheat.
The first place to improve is stop doing stupid things. If you don’t know what you’re doing that’s stupid, then figure that out, and stop doing it.
Make new mistakes. Stop doing what isn’t working, with or without having a new plan, with or without it being stupid. That particular form of not-working will end, and that ending is a lessening of cost or an increase of gain or both (all desirable outcomes).
“Make new mistakes” has been my New Year’s Resolution for several years now. I highly recommend it (as long as it’s understood to refer to social risks and not physical ones).
Can you give us some advice about how to figure out whether what we’re doing is stupid or not? What exactly do you mean by stupid?
Posting vague advice about not doing stupid things is stupid. Don’t do it, it will get downvoted. The reason it’s stupid is that everyone already KNOWS that it’s bad to do stupid things. The stupid things we do are not done in the knowledge that they are stupid. “Figure out what you’re doing that’s stupid” is basically the entire point of this site, and you’re hiding an awful lot of knowledge and complexity in a simple order.
I upvoted it. It’s a useful thing to constantly remind yourself of.
The first thing anyone must do, before any other self-improvement is even logically possible, is to do something about self-deception. Otherwise any self-improvement attempt degenerates into a form of wireheading: you will deceive yourself about what the self-improvement achieves, and end up improving only your ability to self-deceive.
I would suggest, as the first step, dropping this belief in silver-bullet self-improvement, burying it, and putting a stake through its heart. If you look at the accomplishments of people who couldn’t just improve their subjective performance by improving their ability to self-deceive (technical fields, for example), about the only type of self-improvement you see is training, on complicated problems, with tests.
Self-deception is a cognitive process that we are reward/punishment conditioned to perform internally when doing free-form, non-externally-verified thought. E.g. a Christian would feel anxiety when considering the arguments against Christianity, and reward when coming up with arguments for why Christianity is right, and so would get conditioned to feel good about an invalid approach to reasoning and bad about a valid one. Quitting religion won’t reverse this conditioning. The conditioning could perhaps be reversed by studying mathematics for a long time and doing the exercises (where self-deception gets punished, because it results in failures), or by some similar occupation where there is reliable external verification of correctness.
edit: Sorry, Christianity is only meant as an example. This applies to any other ill-founded belief, religious or otherwise. The same can also happen in those forms of atheism that include belief in the validity of an invalid argument against the existence of God. Christianity is simply the world’s most popular religion at the moment, and by far the most popular in developed countries, so it is an important case.
Your base point about being careful about self-deception is made crappy by your rant about christians and your weird veiled accusations.
Regarding Christians, they were a common example. There are many Christian de-converts here—don’t you recall feeling a tingle of fear and anxiety as you explored the possibility that your previous life was wasted on a wrong idea? That is an example of negative reinforcement. If I can’t bring up the world’s #1 religion as an example of religiousness, then what can I bring up?
He also has a strange obsession over “conditioning”, which he appears to think is the fundamental mechanism of the brain.
Whenever you have positive and negative feelings correlate with behaviour, you get conditioning. Every time. It really is this fundamental. There are many other equally fundamental mechanisms, of course, but they act in addition to the conditioning, not as replacement.
When someone builds a working model of that hypothesis as foundational psychology, in sufficient detail to refute alternative hypotheses (such as the ones that people act to maximise utility, or that they act to achieve their purposes) I’ll consider taking it seriously. I do not believe this has ever been done.
There has been a multitude of experiments, on humans and other animals, demonstrating that conditioning works. If when you touch a specific odd shaped object, you get a mild electric shock, it will become difficult for you to touch that object even when you are fully consciously aware that the shocking circuit is disconnected, and you will experience aversion to touching that object (i.e. you will act as if picking up that object had extra cost compared to other objects, even though you are fully aware you won’t be shocked). This is repeatable scientific finding with broad ramifications. (and it is stable over a multitude of positive and negative reinforcements).
Regarding whether people act to ‘maximize utility’, that is trivially falsified by any experiment where people demonstrably make a wrong choice (e.g. failing to switch in Monty Hall). People do not act so as to ‘maximize utility’, and that’s why people need training to better achieve their goals. What you listed are not ‘alternative hypotheses’; they’re normative statements about what people should do under particular moral philosophies.
Thanks for mentioning the Monty Hall problem. I hadn’t heard of it before and I found it incredibly interesting.
When I was a professor, I ridiculed (over beer in a bar) graduate students who were telling me it made sense to switch. One student came up with a clever demonstration using the digits in the serial numbers of a dollar bill as a random number generator where he asked me about switching in the 10-door generalization of the Monty Hall problem. With 10 doors and only one prize, it quickly became apparent that I had my head up my arse.
I learned something that day. Two things if you count the Monty Hall problem. The other: if I am arrogant and obnoxious in my beliefs, I will motivate smart people who disagree with me to figure out how to convince me of my error. Of course, there are no karma points in bars (or at least they are not as obvious), so I did not learn how dangerous such an otherwise productive path is to your karma.
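The student’s 10-door demonstration is easy to reproduce as a quick simulation (my sketch, not his dollar-bill procedure): staying wins about 1 time in 10, while switching to the one remaining closed door wins about 9 times in 10.

```python
import random

def monty_hall_win_rate(n_doors=10, switch=True, trials=100_000):
    """Host opens every non-prize, non-picked door except one;
    the contestant either stays or switches to that last closed door."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(n_doors)
        pick = random.randrange(n_doors)
        if switch:
            # The one remaining closed door hides the prize
            # exactly when the first pick missed it.
            wins += pick != prize
        else:
            wins += pick == prize
    return wins / trials

print(monty_hall_win_rate(switch=False))  # ≈ 0.10
print(monty_hall_win_rate(switch=True))   # ≈ 0.90
```

The generalization makes the logic obvious: your first pick is right with probability 1/n, so switching to the sole surviving door wins with probability (n−1)/n.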
Agreed that the reputation costs of being seen as arrogant and obnoxious are not as immediately obvious in some communities as in others.
I don’t think his objection was that conditioning isn’t a real thing that’s really real, but that it’s not a basis for a fully-descriptive theory of psychological behaviour.
That said, I don’t think you were suggesting it was in the first place.
FWIW, I do think it isn’t a real thing that’s really real, but I’m not all that interested in a prolonged discussion on the matter.
Thank you for the clarification.
More like “suggestive experiments that people read far too much into”.
Talk to Tim Tyler about that. He seems to be as convinced by utility-maximising (as a description of what people actually do, not a norm) as you are of conditioning. There may be others here who believe the same and have better arguments than he does. I think they’re all wrong, so I can’t argue on their behalf, but I will point out the obvious refutation that they might offer, viz. utility is subjective.
It’s pretty amusing how everybody has their own favorite simplified model which they overextend to attempt to explain all human behavior.
Brains are hierarchical backpropagating neural networks! No, they’re Bayesian networks! No, they’re Goedelian pattern recognizers! No, the mind is an algorithm which optimizes for utility! No, it maximizes for reproductive fitness! No, it maximizes for perceptual control! No, it maximizes for status in the tribe!
And then casually applies insights from their own introspection about their own mind to other people, and assumes that everybody else is wrong rather than, perhaps, different.
I’ve made the most progress in “intervening in myself” after I stopped believing that there was some single, simple, fundamental rule underlying all psychology and behavior.
e: I’m not trying to make fun of anyone in particular in this conversation—I was just ragging on the tendency of folks to confuse their map with their map of their map.
It’s not based on self-observation of some kind, nor on a single simple fundamental rule underlying all behaviour (the “all” part is an obvious strawman brought in by RichardKennaway). However, conditioning does affect any behaviour, as far as experiments show.
If you are unable to see the difference between ‘gravity affects every massive object’ and ‘gravitation is a fundamental rule explaining all the universe’, then nothing can help you.
I’m not disagreeing with you. I’m merely pointing out that humans fall too much in love with their pet idea de jour.
Actually not entirely sure why I’m being downvoted, perhaps my comment came off as snarky.
edit: after rereading it, it looks like I was attacking you, when really I was just expressing frustration at an entirely different group of people who write books attempting to convince other people that they have the One True Secret of Life.
I agree. BTW, I can’t downvote anyone.
I’m not trying to explain something with conditioning and just conditioning alone though; all I am saying is that we should expect self deception to get reinforced as it results in internal reward (and avoidance of self deception easily results in punishment). Regarding the voting, I also was down to −7 on this: http://lesswrong.com/lw/ai9/how_do_you_notice_when_youre_rationalizing/5y3w so I do not care a whole lot.
Why did you change usernames?
Well, I wanted to leave because this is generally a waste of time; still, not everyone here is stupid (the non-stupid list includes Yvain, Wei_Dai, Will Newsome even though he’s nuts, and a few others). The relevant question is why I don’t just delete this account.
But why did you stop posting under the other name?
So, you’re saying all these explanations are Turing-complete?