I’m jealous of all these LW meetups happening in places that I don’t live. Is there not a sizable contingent of LW-ers in the DC area?
Vive-ut-Vivas
I’m not understanding the disagreement here. I’ll grant that imperfect knowledge can be harmful, but is anybody really going to argue that it isn’t useful to try to have the most accurate map of the territory?
I haven’t read that paper (thanks for the link; I’ll definitely do so), but that seems like a separate issue from choosing which beliefs to hold based on what they’ll do for your social status. Still, I would argue that limiting knowledge is only preferable in select cases, not a good general rule to abide by, partial knowledge of biases and such notwithstanding.
And ideally, you’d take that fact into account in forming your actual beliefs. I think it’s pretty well-established here that having accurate beliefs shouldn’t actually hurt you. It’s not a good strategy to change your actual beliefs so that you can signal more effectively—and it probably wouldn’t work, anyway.
It’s probably useful at this point to differentiate between actual beliefs and signaled beliefs, particularly because if your beliefs control anticipation (and do so accurately!), you’ll know which beliefs you want to signal for social purposes.
And many of the people in this community rub me the wrong way.
Yes, like you, for stealing my post idea! Kidding, obviously.
At the risk of contributing to this community becoming a bit too self-congratulatory, here are some of the more significant concepts that I’ve grokked from reading LW:
No Universally Compelling Arguments and Ghosts in the Machine. Shamefully, it never even occurred to me to de-anthropomorphize the idea of a mind.
You Provably Can’t Trust Yourself and No License To Be Human, along that same theme.
The Luminosity sequence is a bit under-celebrated, I think, relative to its value. It’s one of the most important things I’ve read here, and applying those concepts has helped me improve my life in not-insignificant ways.
Affective Death Spirals! I can’t praise this enough for giving me the skills to recognize the phenomenon and keep myself from spiraling at the negative end.
Most of all, LW has taught me that being the person that I want to be takes work. To actually effect any amount of change in the world requires understanding the way it really is, whether you’re doing science or trying to understand your own personality flaws. Refusing to recognize said flaws doesn’t make them go away, reality doesn’t care about your ego, etc.
And apparently there was this Bayes guy who had a pretty useful theorem...
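(For anyone who hasn’t run into it yet, the theorem in question, in its standard form:)

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
\]

where H is a hypothesis and E is the evidence you’ve observed: your confidence in H after seeing E should scale with how well H predicts E, weighted by how plausible H was to begin with.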
The other two were a friend of mine and a productivity blog whose name and url I have since forgotten.
Don’t forget: Wikipedia happened.
And this is precisely why I haven’t lost all hope for the future. (That, and we’ve got some really bright people working furiously on reducing x-risk.) On rare occasions, humanity impresses me. I could write sonnets about Wikipedia. And I hate it when so-called educators imply that Wikipedia is low-status or somehow making us dumber. It’s the kind of conclusion that the Gatekeepers of Knowledge wish were accurate. How can you possibly get access to that kind of information without paying your dues? It’s just immoral.
I pose this question: if you had to pick just one essay to introduce someone to LW, which one would you pick and why? I’d like to spread access to the information in the sequences so that it can benefit others as it did me, but I’m at a loss as to where specifically to start. Just tossing a link to the list of sequences is.....overwhelming, to say the least. And I’ve been perusing them for so long that I can’t remember what it’s like to read with fresh eyes, and the essays that have the most impact on me now were incomprehensible to me a year ago, I think.
So, I was directed toward this post, in no small part because I am, demographically, a bit unusual for LW. At times, I’m quite optimistic about LW and rationality-in-general’s prospects, but then I remember that my being here, and participating, is the product of happenstance. But then again, I actually have been pointed to LW from three different sources, so perhaps it was inevitable.
Ah, but here comes my embarrassing admission:
Most people who are already awesome enough to have passed through all these filters are winning so hard at life (by American standards of success) that they are wayyy too busy to do boring, anti-social & low-prestige tasks like reading online forums in their spare time (which they don’t have much of).
The above is a much more influential factor in how much I consider participating than I feel happy admitting. I’ll openly admit that being rational is not my default mode; I wasn’t even singled out as “bright” as a kid, and no out-of-the-ordinary test scores came from me. I have had to really work to keep my thoughts from being immediately processed through an “Is this the kind of belief that will get me social status?” filter. So I do have this massive fear that being rational is just not natural for me. Nor is my IQ, I suspect, anywhere near the high end of the spectrum here....though that filter for social status has, I think, been obscuring my intelligence for most of my life.
Does socializing on the internet feel low-status to me? Yeah, it does....and had I not basically grown up on the internet, I doubt I’d ever give a community like this a second glance. It’s been really tough divorcing society’s ideal of what is status-y from what I actually want to do. I love the internet, and I spend a vast amount of time on it, but it still feels low status to me, and so it’s not something I advertise. Despite my ability to find more interesting conversation here than I can possibly hope to find in real life!
So, even though I was pointed to LW multiple times independently, I probably would never have actually become an active participant (insofar as I am one) had I not had the personal endorsement of my brother, who is an active member, that this was a very intelligent place. Honestly, I wasn’t properly calibrated to identify this place as, well, what it actually is. I don’t know what to suggest to get this to be more appealing to people that are like me—that is, smart enough to benefit from the sequences, but not likely to seek it out on their own. The rationality book is probably the best bet.
It seems reasonable, and is consistent with my own experience, that deliberately and vividly imagining the pleasant experiences associated with doing X activates the former and inhibits the latter.
But since I can’t actually copy this technique and have it work every time, I suspect that other people find it equally unenlightening, which is why I think it’s a poor model for actually bringing someone out of procrastination. That is, I think there’s something else going on in your head, in addition to just imagining the pleasant experience, that you’re not recognizing and therefore can’t communicate. Not just you, of course; this is exactly what I’m struggling with: identifying why my brain works differently some days than others. I’m in the middle of tracking the conditions under which I have an “on” day versus an “off” one. I’ve already noticed that if I write down the patterns of thoughts I have when “on”, thinking them back to myself when I’m “off” doesn’t actually change my mental state. I really want to identify what factor(s) will turn me from “off” to “on” every single time. An impossible goal, alas.
And although I’m using the multiple selves / sub-agents terminology, I think it’s really just a rhetorical device. There are not multiple selves in any real sense.
I would actually dispute this, but that comes down to what you actually mean by a “self”. I don’t see how it’s not obvious that there are multiple agents at work; the problem of akrasia is, then, deciding which agent actually gets to pilot your brain at that instant. I suspect this is alleviated, to some extent, by increased self-awareness: if you can pick out modes of thought that you don’t actually want to “endorse” (like the “I want to be a physicist” versus “I don’t want to do physics” example below), you are probably more likely to be able to override what you label as “not endorsed” than if you are sitting there wondering, “wait, is this what I really think? Which mode is me?”
It seems to me that rationality is not superego strengthening, but ego strengthening, and the best way to do that is to elevate whoever isn’t present at the moment. If your superego wants you to embark on some plan, consult your id before committing (and making negative consequences immediate is a great way to do that); if your id wants you to avoid some work, consult your superego before not doing it.
Thing is, I don’t think this actually happens. When I’m being productive and not procrastinating, and I try to sit back and analyze why I’m “on” that day, I might attribute it to something like “hmm, long-term desires seem to be overriding short-term desires today, clearly this is the key”. As if, for whatever reason, my short-term self was on vacation that day. My belief is that what’s happening is something much more fundamental, and something that we actually have much less control over than we think; the conditions for not-procrastinating were already in place, and I later added on justifications like, “man, I really need to listen to far mode!”. This is why, when I’m having a day where I am procrastinating, those same thoughts just don’t move me. It’s not the thought that’s actually determining your actions (“My desire to make an A in this class SHOULD BE stronger than my desire to comment on Less Wrong, so therefore I am going to override my desire to play on the internet to do work instead”), but the conditions that allow for the generation of those thoughts. I think that’s why telling myself “I don’t want to do this problem set, but I know I need to” doesn’t actually move me....until it does.
YMMV, of course. Others might be able to induce mental states of productivity by thinking really hard that they want to be productive, but I sure can’t. It’s either there or it isn’t. I can’t explain why it’s there sometimes, but if you ask me in a productive mode why I’m able to get so much more done, well, it’s just obvious that far mode is more important.
This reference point phenomenon is, to me, the kind of thing that seems obvious after you’ve already done it, but isn’t actually helpful if you’re trying to change a behavior.
If you’re trying to get into the habit of going to the gym or whatever, you already know that it’s going to be to your benefit in “far” mode, but “near” mode you just doesn’t want to go. Near mode you has better stuff to do right now; healthfulness is far mode’s problem. You can’t re-program yourself to associate “working out” with “feeling good” until you’ve already been doing it for a while. This has been my experience, anyway. I run every day, and it’s just part of what I do, but the catalyst for getting into this habit wasn’t that I was suddenly able to convince myself that this was something good for me and that I’d enjoy it later even if I didn’t enjoy it now. No, the reason I started running was that at the time I had an immediate desire to do it (stress, pent-up frustration with life situations). I have absolutely no ability to trick my near mode into doing things for the benefit of far mode; it has to have utility to me, right now.
Of course, now that I’ve been doing this for a while, when I’m about to go run I don’t even have a mental dialog where I have to convince myself that it’s something I want to do; I just do it. If I haven’t run today, then obviously I am going to run: simple modus ponens. If for some reason I have a voice saying I don’t want to, my brain immediately overrides it with, “But that just doesn’t make sense!” If I were trying to convey this mental process to someone else, I might say something like, “well, I just envision myself running and having a good experience, and then not running and not having that good experience, so I’ve changed my reference point.” This after-the-fact explanation sort of describes what’s happening in my mind, but it doesn’t actually give somebody else the tools to copy it. The only advice I’d give is to find an actually compelling reason to do it, whatever it is, right now, rather than trying to fake yourself into thinking you want to do something that you really don’t.
Basically, you’re right about the changing reference points, but I think you’ve got the order mixed up. That happens after you’ve changed the behavior.
As a general rule, I try not to lie to myself. I wasn’t referring to the social convention of picking a side to cheer for, but to the internal conflict that occurs when you love someone and they turn around and hurt you; for instance, your SO makes a huge mistake, but you’re reluctant to let that outweigh all of their good qualities. It then turns into a situation where you have to determine exactly where the moral event horizon lies that makes them unsuitable as your partner. (If anybody has an algorithm for this, please, help me out!)
And on an equally depressing note, I’ve run into this with significant others. Sadly, I’ve found that my inability to subscribe to the Good Guy/Bad Guy narrative hasn’t resulted in optimizing relationships.
Twinkie diet helps nutrition professor lose 27 pounds
I think it’s supposed to be his mother, Lily.
I mentioned a few comments below that I have experience with this method. It works. What I’ve worked on specifically is rehearsing the transitions between topics, and you can even practice this with a friend who pretends to be a stranger. Role playing is actually fantastic for acquiring conversation skills, and both of you benefit.
I don’t want to re-start the argument from last night, so I’ll just say that this method is only helpful if you’re trying to get from small talk to meaningful conversation, not if you’re trying to break the ice in the first place.
The inertia of the conventional wisdom (“you’ve gotta go to college!”) is further slowing the new generation’s adaptation to this reality, not to mention being another example of Goodhart’s Law.
I wish I could vote this comment up a hundred times. This insane push toward college without much thought about the quality of the education is extremely harmful. People are more focused on slips of paper that signal status than on the actual ability to do things. Not only that, but people are spending tens of thousands of dollars on degrees that are, let’s be honest, mostly worthless. Liberal arts and humanities majors are told that their skill set lies in the ability to “think critically”; this is a necessary but not sufficient skill for success in the modern world. (Aside from the fact that their ability to actually “think critically” is dubious in the first place.) In reality, the entire point is networking, but there has to be a more efficient way of doing that, one that doesn’t cripple an entire generation with personal debt.
I could make an appearance. I’m not super familiar with DC so staying pretty close to a metro station would be ideal.