Meta Addiction
I was wondering if anyone has ever had the feeling, like I get sometimes, that they were addicted to ‘meta-level’ optimizing rather than low-level acting? As in, I’d rather think about how to encourage myself to brush my teeth more than brush my teeth. I’m guessing there’s something about this under the akrasia threads?
The motivation to remain at the meta level, thinking about things rather than acting on them, seems to be that it takes less effort to think about doing things than to do them, and that there is potentially more long-term benefit in making an overall improvement than in taking a specific action. The drawback is that if you spend all your time thinking about meta, you won’t get anything done.
Related: Levels of Action.
As someone who suffers from this, I try to test my self-improvement ideas against reality very soon after thinking of them. A principle I try to follow: If you think of some brilliant scheme for improving yourself, and there’s no reason not to implement it right away, do so, so you can start collecting data ASAP.
I like that because it interrupts the urge to come up with more ideas.
XKCD on my desk at work: http://xkcd.com/974/
This topic made me wonder whether it’s that we can’t make things, or that we make things that are harder to put your finger on: better ideas. Another way of putting that is that we have conceptual OCD, and compulsively spend our time straightening up our ideas.
Instead of paper clip maximizers, we’re good idea maximizers.
Or sounds-superficially-good-enough-to-make-you-feel-good idea maximizers?
Most definitely. That’s sort of what I meant by conceptual OCD. We internally survey our ideas, they seem a mess, and we feel a compulsion to tidy them up. Once we’re tidy internally, we feel ok, and take a nap.
We are good idea maximizers, once all the data is stuffed into our heads. But the application of the ideas to make a change outside ourselves is not something that motivates us. We’re model builders, not model appliers.
As you suggest, that probably leaves our ideas half-baked. We haven’t caught the inconsistencies and inadequacies, because we’ve never tested the model against reality, or even against a clear PowerPoint presentation outside our own heads.
We’re internal inconsistency minimizers. Once the internals seem coherent, we feel fine and move on. We’re powerful and useful machines, if used properly.
This year I’ve quadrupled the amount of structured meta thinking I do, compared to last year, and I have seen a big improvement in my ability to make and stick to goals. So I think more meta-thinking can help you get more done, if you have a problem with sticking to resolutions, as I do. Probably the meta-thinking has to have a point to it, though.
But I’ve also been amused at just how much meta-thinking it takes for me to achieve a goal. Like, currently, achieving brushing my teeth more would take hours and hours of thinking about brushing my teeth, considering adopting the goal of brushing my teeth more, motivating myself to brush more, expressing “brushing more” as a pithy phrase, tracking my brushing daily, reviewing my brushing track record weekly etc etc.
So, in future I’d definitely like to reduce the ratio of meta-thinking to goal-achieving, by a lot, but still, I’m getting more done with more meta-thinking at the moment.
Edit: come to think of it, I could stand to brush my teeth more.
It is not just “more meta” versus “less meta”. On any level one can do the right thing, do the wrong thing, do it skillfully, or do it clumsily.
Problem is, many intelligent people seem to have a bias that “any meta is good meta”. The advantages of going meta are obvious to intelligent people. But there are also dangers: meta can be attractive for the wrong reasons. It allows one to avoid work, so the choice between “more meta” and “less meta” is influenced by laziness. It allows one to avoid updating (if “updating” is “meta work”, then I see a pattern here), because the more meta one goes, the further one gets from experience, so the pressure of reality is weaker.
We can go meta to avoid reality, and at the same time pretend we do it because we are trying to get better at handling reality. We should check whether the meta level is really helpful, but we can always dodge that check by saying “it does not seem helpful now, but in the long run it will be”, even though we have no evidence for that. (There is evidence that, in general, going meta can be helpful. But I’m talking about the lack of evidence that this specific instance of going meta is helpful.)
If going meta makes you achieve your goals, then you are doing it right. Perhaps it’s not the best way, but at least it is better than nothing. If your ability to make goals and achieve them is improving, that’s evidence that you are doing the right thing at the second meta level too. But the evidence appears at the bottom level; without it, all the meta work would be useless.
Sometimes when trying to get instant gratification by making dubious commitments to optimization you may find yourself questioning the method’s efficacy. Don’t worry, just like a heroin addict must increase their dosage you must move on to stronger stuff. From now on, try to get your gratification from meta-meta-optimizing. You’re already off to a great start with this thread; try reading the sequences.
hrm, found something on it here: http://lesswrong.com/lw/aq/how_much_thought/ . Still reading it. But I guess the concept of bounded rationality encompasses it pretty well. Any practical approaches to bounding your rationality? A stopwatch maybe?
Some thinking about what to think about is very important; unfortunately, it is also very hard to get right. For example, here we can discuss optimal decisions involving probability while entirely forgetting the limited runtime, and the effect that introducing risk has on the efficacy of bounded calculations in the future. When you take risks, for example, you double the size of the expected-utility-calculating tree, meaning that in limited time you cut down on depth.
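(A minimal sketch of that trade-off, with made-up numbers rather than anything from this thread: for a fixed budget of outcomes you can afford to evaluate, a larger branching factor, such as adding a chance node to every decision, directly cuts how many steps ahead you can look.)

```python
# Rough illustration: how deep a full decision tree you can expand
# within a fixed node budget, as a function of branching factor.
# The budget and branching factors below are hypothetical.

def reachable_depth(node_budget: int, branching_factor: int) -> int:
    """Deepest full tree of this branching factor that fits in the node budget."""
    depth = 0
    nodes = 1  # the root
    # Keep adding whole levels while the next level still fits in the budget.
    while nodes + branching_factor ** (depth + 1) <= node_budget:
        depth += 1
        nodes += branching_factor ** depth
    return depth

budget = 10_000          # how many outcomes you can afford to evaluate
for b in (2, 4, 8):      # doubling b ~ one extra risky branch at every choice
    print(f"branching factor {b}: can look {reachable_depth(budget, b)} steps ahead")

# branching factor 2: can look 12 steps ahead
# branching factor 4: can look 6 steps ahead
# branching factor 8: can look 4 steps ahead
```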
Then there’s this: you can think about how to optimize your behaviour by a single-digit percentage, for example by trying to make unbiased estimates of your utility, which you won’t do very well anyway because the world is hard to predict. Or you can spend that runtime on, e.g., learning to program, then coming up with and writing some popular application, putting it up on a relevant store, and getting far more than enough money to paper over your inefficiencies.
Of course. Doing low level stuff like brushing your teeth is boring. Going meta is fun.
Eventually you need to actually cash out your strategies and really brush your teeth, at which point going meta can be a form of procrastination that has the benefit of making you feel like you are being productive.
I try to mentally file metacognition under “enjoyable pastime”, but I’m not sure if the low level resource manager agrees with the user. This produces an acute form of akrasia wherein, while attempting to be productive, I go really meta, encounter a stack overflow, resolve the issue, and then treat myself to a well deserved break because I’m such a brilliant meta-theoretician.
This is fine if you then go and implement your meta solution. It may not be the most efficient solution, but if meta is easier for you and you can follow through on it, then harness it. (Works for me!)
And yet here you are, going meta about meta.
If going meta doesn’t work, you’re not doing it enough.
How do you know that?
I have faith.
There may be some value in intentionally going meta, I guess: trust the maximum recursion depth of the brain to give out long before you’re likely to run out of energy to keep going sideways at the same level. If you DO find a decent meta strategy, starting from the broadest plan and fleshing it out all the way down to actually doing things is often a good direction of attack anyway.