Just another Bay Area Singularitarian Transhumanist Libertarian Rationalist Polyamor-ish coder & math nerd. My career focuses on competitive governance; personally I’m very into personal development (“Inward & upward”); lately I’ve gotten super into cultivation novels because I want to continuously self-improve until my power has grown to where I can challenge the very heavens to protect humanity.
patrissimo
Step 0: Get a time machine.
Step 1: Go back in time and tell yourself not to waste time on a degree, but to go invent Google or Facebook or something useful.
Step 2: Profit!
Couldn’t it just be an erroneous application of (an intuited version of) Newton’s law of cooling, which says that heat transfer is linearly proportional to heat difference? They assume that the thermostat temperature is setting the temperature of the heating element, and then apply their intuited Newton’s Law.
Seems pretty rational to me.
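The mistaken intuition is easy to see in a toy simulation: with a bang-bang thermostat driving a fixed-power furnace, raising the setpoint does not make the room heat any faster; it only changes when the furnace shuts off. A minimal sketch (the rates and coefficients here are made up purely for illustration):

```python
# Toy simulation: a fixed-power furnace under bang-bang thermostat control.
# Assumption (mine, for illustration): the furnace delivers constant power
# whenever the room is below the setpoint; it cannot run "hotter" just
# because the setpoint is higher.

def time_to_reach(target, setpoint, start=10.0, heat_rate=0.5,
                  outside=10.0, loss_coeff=0.02, dt=0.1):
    """Minutes until the room first reaches `target` temperature (degrees C)."""
    temp, t = start, 0.0
    while temp < target:
        furnace_on = temp < setpoint                 # bang-bang control
        heating = heat_rate if furnace_on else 0.0   # fixed power, on or off
        cooling = loss_coeff * (temp - outside)      # Newton's law of cooling
        temp += (heating - cooling) * dt
        t += dt
    return round(t, 1)

print(time_to_reach(target=20, setpoint=20))
print(time_to_reach(target=20, setpoint=30))  # same time: a higher setpoint doesn't speed heating
```

The intuited (wrong) model treats the setpoint like the temperature of the heating element, so that a higher setting transfers heat faster; in the actual system the setpoint only gates a fixed heat input on and off.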
Lately I’ve been thinking about if and how learning math can improve one’s thinking in seemingly unrelated areas.
This seems like a classic example of the standard fallacious defense of undirected research (that it might and sometimes does create serendipitous results)?
Yes, learning something useless/nonexistent might help you learn useful things about stuff that exists, but it seems awfully implausible that it helps you learn more useful things about existence than studying the useful and the existing. Doing the latter will also improve your thinking in seemingly unrelated areas...while having the benefit of not being useless.
If instead of learning the clever tricks of combinatorics as an undergraduate, I had learned useful math like statistics or algorithms, I think I would have had just as much mental exercise benefit and gotten a lot more value.
Ok, great, I’m glad I misunderstood.
Completely agree with your general point on marginal analysis (although I’m a TDT skeptic), and am a fan of GiveWell, but this is trivially wrong:
It is not possible for everyone to behave this way in elections: no voter is able to consider the existing distribution of votes before casting their own.
This seems to assume away information about the size of the electorate as well as any predictive power about the outcome. Surely the marginal benefit of a Presidential vote in a small swing state is massively higher than in a large solidly Democratic state, for example. And in addition to historical results, there is polling data in advance of the election to improve predictions.
Besides being theoretically true, we can see it empirically from the spending patterns of both Presidential campaigns and political parties on Congressional races. They allocate money to the states / races where they believe it will do the most marginal good, which is often a very unequal distribution. Thus they do, in fact, “consider the existing distribution of votes before casting” their advertising dollars.
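The point about electorate size and closeness can be made concrete with a crude model in which your vote matters only when the other voters split exactly evenly. The numbers below are illustrative made-up inputs, not real polling data:

```python
import math

# Crude pivotal-vote model (my own illustration): your vote is decisive
# only if the other n voters tie exactly, each voting for candidate A
# independently with probability p. Computed in log space so large
# electorates don't underflow floating point.

def log10_p_pivotal(n, p):
    """log10 of the exact-tie probability for even n, via the binomial pmf."""
    half = n // 2
    log_comb = math.lgamma(n + 1) - 2 * math.lgamma(half + 1)
    log_p = log_comb + half * (math.log(p) + math.log(1 - p))
    return log_p / math.log(10)

# Illustrative numbers only: a smaller 50/50 swing state vs. a bigger 55/45 safe state.
print(log10_p_pivotal(100_000, 0.50))    # roughly -2.6: not negligible
print(log10_p_pivotal(1_000_000, 0.55))  # astronomically smaller
```

Both factors cut the same way: a larger electorate shrinks the tie probability polynomially, while any lean away from 50/50 shrinks it exponentially, which is why campaign dollars concentrate so heavily on close races.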
At the risk of provoking defensiveness I will say that it really sounds like you are trying to rationalize your preferences as being rational when they aren’t.
I say this because the examples that you were giving (local food kitchen, public radio), when compared to truly efficient charities (save lives, improve health, foster local entrepreneurship), are nothing like “save 9 kids + some other benefits” vs. “save 10 kids and nothing else”. It’s more like “save 0.1 kids that you know are in your neighborhood” vs. “save 10 kids that you will never meet” (and that’s probably an overestimate on the local option). Your choice of a close number is suspicious because it is so wrong and so appealing (by justifying the giving that makes you happy).
The amount of happiness that you create through local first world charities is orders of magnitude less than third world charities. Therefore, if you are choosing local first world charities that help “malnourished” kids who are fabulously nourished by third world standards, we can infer that the weight you put on “saving the lives of children” (and with it, “maximizing human quality-adjusted life years”) is basically zero. Therefore, you are almost certainly buying warm fuzzies. That’s consumption, not charity. I’m all for consumption, I just don’t like people pretending that it’s charity so they can tick their mental “give to charity” box and move on.
I strongly agree that procrastination is not restful; this is a standard theme in modern productivity writing. Procrastination (like reading blogs / tweets / etc.) is a sort of worst of both worlds: it is neither useful nor restful; it passes the time and avoids immediate pain without providing pleasure or renewal.
That’s why The Energy Project, Pomodoro, Zen Habits, etc. recommend that you schedule renewal breaks into your day—at a minimum midmorning, lunch, and midafternoon. I think the deliberate practice literature recommends breaks every 90 minutes. Taking a walk outside & exercise are oft-recommended, but really, just being conscious of the goal of renewal and experimenting to find things that will work is all you need. It’s helped me be more productive.
Social conversations with co-workers are also good, but it’s important that they be relaxed & guilt-free. One of the secrets of renewal is that it works much better if accepted as a need, for some reason guilty renewal doesn’t renew. Renewal requires relaxation while guilt prevents it, something like that.
Glad to hear that you’re learning (and writing about) basic productivity hacks like this, LW will get its instrumental rationality black belt yet :).
References:
http://zenhabits.net/take-lots-of-breaks-to-get-more-done/
http://www.theenergyproject.com/search/node/renewal
Wow, SIAI has succeeded in monetizing Less Wrong by selling karma points. This is either a totally awesome blunder into success or sheer Slytherin genius.
Learning math sure isn’t useless, and it seems to mostly consist of thinking about useless or nonexistent things.
I learned a lot of math (undergraduate major), and while it entertained me, it has been almost completely useless in my life. And the forms of math I believe to be most useful and wish I’d learned instead (statistics) are useful because they are so directly applicable to the real world.
What useful math have you learned that doesn’t involve reference to useful or existent things?
I worry that new year’s resolutions are a Schelling point for failed self-improvement: by using a fundamentally flawed approach, they tend to fail and then discourage people from future attempts at positive change.
Can we try to switch to the meme of “Annual retreat & reflect about one’s life, goals, and habits”, rather than these so frequently failed “resolutions”, whose very name implies that the solution is more “resolve”, and thus the problem is insufficient “resolve”, rather than insufficient experimentation, knowledge about habit formation, realism about achievable change, or any of the other numerous actual reasons?
I mean, it’s 2010, and we know we lose weight through hacks, not the application of more willpower—same goes for anything else.
“That’s how great arguments work: you agree with every step (and after a while you start believing things you didn’t originally).”
Also how great propaganda works.
If you are going to describe a “great argument”, I think you need to put more emphasis on its being tied to the truth rather than being agreeable. I would say truly great arguments tend not to be agreeable, because the real world is so complex that descriptions without lots of nuance and caveats are pretty much always wrong, whereas simplicity is highly appealing and has a low cognitive processing cost.
I found this quote brilliant solely because of the incongruous “like” in there. It makes the whole thing turn into a Deep Mystery instead of a Deep Saying.
After all, wouldn’t someone who does the important things also stick to the most important words, ie those with content, unlike “like”? If so, how delightful is the erroneous arrogance of this quote! If not, what a fascinating challenge to my assumptions about the implications of language pattern!
Apparently this community really values the combination of wit, brevity & correctness, which are all good things.
Unfortunately, since your brief witty correct remark was about something irrelevant, that means we are rewarding entertainment that wins status/appreciation without contributing to meaningful discussion, relative to deep and/or thoughtful insights. Quite understandable, but I can see why you were horrified—one expects better of LWers.
I interpret this as evidence against the correctness of the elitism strain in LW culture. We are all monkeys, the great thing about LW is that we know it and want to change it—not that we have.
I don’t think this is true. I know people who “assume good faith”, and they are amazing and a pleasure to debate with—it never becomes an argument. But I have not found this to be correlated with analytical thinking—if anything, the opposite.
Rather, my experience with analytical people (incl. myself) is that they just don’t see the emotional subtext. They see the argument, the logical points, and they don’t even think about the status implications, who challenged whose authority, and so forth. It’s not as pleasant to think of us non-neurotypicals as oblivious rather than charitable, but it seems more accurate to me.
For example, the idea that all that matters is whether my argument is good is so natural to me and core to my family upbringing that it’s taken me many years to unlearn it. To learn that people care how an argument is phrased, how openly you suggest they are wrong, and who the authority figure is (ie whether the challenger is of low status in that context).
In some ways, my obliviousness was very powerful for me, because ignoring status cues is a mark of status, as are confidence and being at ease with high-status people—all of which flow from my focus on ideas over people or their status. Yet as I’ve moved from more academic/intellectual circles to business/wealth circles, it’s become crucial to learn that extra social subtext, because most of those people get driven away if you don’t have those extra layers of social sense and display them in your conversational maneuvering.
I have also found that being able to speak bluntly and off the top of my head about what I believe to be true is enormously valuable for me in truth-seeking. Having friends and forums where that is the culture is immensely valuable. Yet learning how to not do that—how to use my “polite pen”—has also been immensely valuable to me in getting my ideas across to a broader audience.
Each has its place, and I think what most LWers need to hear is the point in this post, but it would have been clearer if all the examples were from the workplace / regular life. Then it wouldn’t have had this challenge to LW culture you perceived.
If we’re going to talk about the cognitive framing effects of language, as the original post did, how about your use of the word “Mundane”?
To me, it seems actively harmful to accurate thinking, happiness, and your chance of doing good in the world. The implication is that most humans are a separate, lower class, with the suggestion of contempt and/or disgust for those inferior beings, which has empirically led to badness (historically: genocide; in my personal experience: it has been poisonous to Objectivism and various atheist groups I’ve been in).
I’d like to hear some examples where framing most people as both “lesser” and “other” has led to good for the world, because all the ones I’m pullin’ up are pretty awful...
This is an awesome response and extension, although it doesn’t invalidate the point that we should learn what signals our words will give and choose them consciously. It’s basically always better to understand and use the subtext. Whether using it to make sure you don’t accidentally press the emotional buttons of a good-willed collaborator, or understanding when others are using it to exploit you.
In my experience, relentless politeness + authenticity (don’t give up your basic point, but phrase it very nicely) is a great help at defeating setups. In the presentation case, sure, the questioner has upgraded the idea. But he has still pointed out its core flaw! A less adept questioner might either a) not question at all, knowing that it looks like a rude challenge, or b) question rudely because he doesn’t know how to be polite. Either of which would make it more likely for the bad idea to pass unchallenged.
The key is authenticity: politeness shouldn’t stop you from putting the knife into something that should die, it should just make it so smooth that it hurts the minimum and shows everyone that you are acting in the common interest. It’s an empowering tool so that you can play the game of fighting back against bad gaming without looking like a gamer or a fighter.
Anyway, I have a sunny disposition so I don’t share your negative framing of this, but your meta-point about how others can use these rules for evil and/or selfishness is great (although maybe at too high a level of Slytherin to be really useful to most LWers).
You seem to be assuming that what you want to hear is how people should be learning to communicate (“I’d prefer they skip it”), but part of the point is that we are not like most people. If you want to communicate effectively with the broader population, then you have to focus on what they like to hear, not judge communication suggestions based on whether you would like hearing it.
Also, I love brevity, but I charitably assumed that the politeness examples were exaggerated to make the point. Exaggerated examples, while they often bother analytical types who already get the point (“but that’s too far the other way!”) are (IMHO) quite useful at helping get across new ideas by magnifying them.
And compactness is hard, as is habit change. So developing compact politeness seems harder than developing politeness and then polishing it with brevity and clarity. Maybe too hard for some people—one habit at a time is often easier.
Yeah, but you’d get lots of applause!
This is not a way to take advantage of confirmation bias. Confirmation bias means that others look for confirming evidence for their true theories, and ignore disconfirming evidence. This process is not much affected by you adding extra confirmatory evidence—they can find plenty on their own. Instead, it is a way to fool rational people—for example, Bayesians who update based on evidence will update wrong if fed biased evidence. Which doesn’t really fit here.
The way to actually use confirmation bias to convince people of things is to present beliefs you want to transmit to them as evidence for things they already believe. Then confirmation bias will lead them to believe this new evidence without question, because they wish to believe it to confirm their existing beliefs.
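The “fooling honest updaters with biased evidence” failure mode is easy to demonstrate with a toy Bayesian: feed an updater only the confirming half of a fair coin’s flips, and its posterior confidently converges on the wrong bias. A sketch under made-up assumptions (the coin setup and prior are my illustration, not from the discussion above):

```python
import random

# Toy Bayesian with a Beta(1, 1) prior over P(heads), updating on coin
# flips. If an adversary forwards only the heads, the posterior lands
# near "heavily heads-biased" even though the coin is fair: correctly
# executed updating on selectively filtered evidence yields confident
# wrong beliefs.

random.seed(0)
flips = [random.random() < 0.5 for _ in range(1000)]  # a fair coin (True = heads)

def posterior_mean(observations, a=1, b=1):
    """Posterior mean of P(heads) under a Beta(a, b) prior."""
    heads = sum(observations)
    tails = len(observations) - heads
    return (a + heads) / (a + heads + b + tails)

print(posterior_mean(flips))                    # near 0.5: honest evidence
print(posterior_mean([f for f in flips if f]))  # near 1.0: only the confirming flips
```

The updater’s math is flawless in both cases; the second belief is wrong only because the evidence stream was curated, which is the distinction between exploiting confirmation bias and exploiting selection of evidence.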