I read a lot of C&H growing up, and looking back at it, I’m surprised at how many interesting ideas it contains. I wonder how much of my present self was shaped by having these ideas implanted at age 8 or 9...
Steven Strogatz did a series of blog posts at NY Times going through a variety of math concepts from elementary school to higher levels. (They are presented in descending date order, so you may want to start at the end of page 2 and work your way backwards.) Much of the information will be old hat to LWers, but it is often presented in novel ways (to me, at least).
Specifically related to this post, the visual proof of the Pythagorean theorem appears in the post Square Dancing.
The fact is that there are many battles worth fighting, and strong skeptics are fighting one (or perhaps a few) of them. (As I was disgusted to see recently, human sacrifice apparently still happens.) However, I also think it’s ok to say that battle is not the one that interests you. You don’t have the capacity to be a champion for all possible good causes, so it’s good that there is diversity of interest among people trying to improve the human condition.
Thanks for the clarification, I see what you mean. The distinction between repetitive, droning thoughts and actively reasoning about the problem makes sense.
I think eugman is more referring to negative thoughts that cycle through a depressed person’s head on a regular basis. They’re messages that remind you that you’re a failure, you let people down, you’re not going anywhere, and they play through your brain almost all your waking hours.
The negative thoughts you described are the ones that healthy people encounter in real, negative situations that must be dealt with. In that case, rumination is appropriate and finding rational solutions is desirable. But when your brain is essentially buggy and constantly replaying cached, (often incorrect or completely out of proportion) negative beliefs, it might be entirely appropriate to forcibly jump to another track instead of dwelling on it.
Put another way, in a depressed brain, rumination and focus on the “problem” is the default mode of operation. Sometimes it eventually yields positive solutions, but frequently it’s more of a death spiral. Short circuiting that kind of process seems entirely reasonable to me.
Are there any good examples of the long strategy working? Ron Paul seemed like a potential case of exactly that, and in 2008 he was rallying support on the internet and raking in serious political campaign contributions. He got a small chunk of the popular vote and raised the profile of libertarianism a little. However, a few years later the media have still apparently decided that he is unelectable and give him far less coverage than the “mainstream” candidates. (I’m not a Ron Paul fan myself, but he should appeal to the fiscal conservative base and he seems to be a man of integrity.)
I read it, and I disagree. I think it’s irrational to expect everyone to do what he suggests, and it only works if everyone does it.
Edit: Using the word “strategic” is probably misleading. Eliezer proposes a particular strategy—vote for someone you actually like, regardless of popularity or perceived likelihood of winning. It’s still a strategy, and voting is still a game. So the argument isn’t really about whether or not to vote “strategically”, it’s about which strategy one should use.
In my original comment I argue for the meta-strategy of changing the electoral system to one that isn’t as broken as plurality systems are. I also argue that, given the current system, it still makes sense to vote for the least evil candidate who has a shot at winning.
It might just be that I disagree with him, but I find this post out of character for Eliezer. He argues against being strategic or using game theoretical approaches, which is surprising to me. How can that possibly make sense? Shouldn’t I try to maximize the value of my vote given my expectations of the game I’m playing and the people I’m playing with/against? Essentially, I think he’s arguing for an idealistic solution instead of a pragmatic one.
I guess I should admit that, in a perfect world, voting for whom you actually want, regardless of perceived popularity, might work well. However, it seems more important to me, having identified that the electoral system seems to consistently produce these kinds of results, to try to identify the problem. Is the problem really with the voters, or is it inherent in the structure of the rules?
What should democracy produce, ideally? It should produce election results that closely mirror what people actually want. It turns out that the plurality voting system, which we use in most places in the US, is well known to support a two-party stranglehold as a failure mode. It is very likely to produce an outcome which leaves most people unsatisfied. Why not work on fixing the system that produces this result instead of just hoping for everyone in the country to suddenly agree to play the game by different rules? (In San Francisco, we use “instant runoff” voting rules that produce an outcome more in line with what people actually want. Of course, it’s not perfect.)
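For anyone unfamiliar with the mechanics, here’s a minimal sketch of instant-runoff counting in Python (my own illustration, not the official San Francisco procedure); it assumes each ballot is simply a ranked list of candidate names.

```python
# A minimal sketch of instant-runoff voting (my own illustration).
# Each ballot is a ranked list of candidate names, most preferred first.
from collections import Counter

def instant_runoff(ballots):
    """Repeatedly eliminate the candidate with the fewest first-choice votes
    until one candidate holds a majority of the remaining ballots."""
    candidates = {name for ballot in ballots for name in ballot}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tallies = Counter(
            next(name for name in ballot if name in candidates)
            for ballot in ballots
            if any(name in candidates for name in ballot)
        )
        total = sum(tallies.values())
        leader, leader_votes = tallies.most_common(1)[0]
        if leader_votes * 2 > total or len(candidates) == 1:
            return leader
        # Eliminate the weakest candidate; their ballots transfer next round.
        weakest = min(tallies, key=tallies.get)
        candidates.discard(weakest)

# Example: a first-choice vote for a minor candidate transfers on elimination.
ballots = [["Green", "Dem"], ["Green", "Dem"], ["Dem"], ["Dem"],
           ["GOP"], ["GOP"], ["GOP"]]
print(instant_runoff(ballots))  # -> "Dem"
```

The point being that, under these rules, a first-choice vote for a minor candidate isn’t wasted: it transfers to the voter’s next choice once that candidate is eliminated.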
Essentially my question is, why would you insist that people shouldn’t vote strategically, when it is clearly in their best interests to do so? If you strongly believe (for example) Rick Perry would be a threat to your well being, why would you go vote for a third party instead of doing your best to ensure Perry doesn’t win?
What percentage of educated Westerners would you guess are to the right (as operationalized below) of you on economic questions?
Sorry, I find this survey terrible. I don’t know how to answer most of the questions. Questions like the one above require me to have more knowledge than I personally have (about the internal state of billions of educated Westerners). You are supposed to do this work for us by asking 5 to 10 representative questions with which we can strongly agree, strongly disagree, etc., and then use that information to categorize respondents.
The way this survey is written I don’t even feel comfortable submitting my response, because the percentages are wild guesses. Further, I don’t even know what it means to be “left” or “right” on race and gender issues. Also, the categories in the first part contain multiple, sometimes conflicting labels. It’s really hard to know how to respond to those, as well.
I say all this as someone with concrete political beliefs! If you asked me specific questions, I would happily answer them. But I’m not comfortable speculating about the political beliefs of people occupying an entire hemisphere.
This is the subtext implied in the saying, “A Lannister always pays his debts,” from A Game of Thrones by George R. R. Martin. It is frequently applied in the context of compensating someone for helping one of the Lannisters, but it also functions as a warning against misdeeds.
This is a good summary, but a post like this is greatly strengthened by links to external resources to justify or expand upon the claims it makes. If I didn’t know anything about the topic, some of the text would be unclear to me, and I would want the ability to click around and learn more. For example:
What is the sunk cost fallacy? (Link to wikipedia/LWwiki)
There is some recent evidence about rationality as a treatment for depression.
Also, I think one of the first reactions a typical person will have is, “Rationality? Of course I’m rational.” To start from square one on this topic, you have to explain to people that, surprisingly enough, they aren’t. Politely, of course. Then you can start talking about why it’s important to work on.
All that said, I think the examples given are great; they’re salient problems for most people, and you can make a good case that rationality will improve one’s outcomes for those problems.
The link to the post is incorrect; it points to the previous rerun and should point here: http://lesswrong.com/lw/li/unbounded_scales_huge_jury_awards_futurism/
Edit: It has been fixed. Thanks. :)
Doesn’t that depend on heart attacks being a function of age rather than a function of time? Anti-aging doesn’t necessarily mean anti-arterial-plaque-buildup. I do agree that entire classes of problems might go away though, which would be amazing.
I don’t believe so, but maybe someone smarter than me can explain this. The magic 4%-of-a-million = $40k value should indeed factor in, but it shouldn’t dominate the expected value calculation to the degree you’re suggesting.
Let’s try a different angle:
Then, with 4% interest on my $160k yearly, it would take me about 5.5 years to accumulate that million dollars, or 11000 hours.
So over 5.5 years, you theoretically earned $1,220,000: a million in savings plus $40k/year in living expenses for 5.5 years. That’s an effective hourly wage of about $110.90.
At an effective hourly wage of $110, your expected lottery ticket return is 1.0 hours, not 1.1.
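To make the arithmetic explicit, here’s a quick back-of-the-envelope check in Python (a sketch using the figures assumed in this thread: $160k/year saved, a 4% return, and roughly 2,000 working hours per year):

```python
# Back-of-the-envelope check of the figures assumed in this thread.
savings_per_year = 160_000
rate = 0.04
years = 5.5
hours = years * 2_000            # ~11,000 working hours

# Future value of $160k saved each year at 4% (annual compounding).
balance = savings_per_year * ((1 + rate) ** years - 1) / rate
living_expenses = 40_000 * years
total_earned = 1_000_000 + living_expenses   # the $1,220,000 figure

print(round(balance))                  # ~963,000 -- roughly the million
print(round(total_earned / hours, 2))  # ~110.91 effective hourly wage
```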
I believe you left out the opportunity cost of spending the $100 on a ticket instead of letting it accrue 4% interest. That is, you compared $100 in today’s dollars to $100 in 2017 dollars, but it should’ve been $124 in 2017 dollars.
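Concretely, with the same 4% return over the same ~5.5 years:

```python
# $100 left to compound at 4% for ~5.5 years is worth about $124,
# which is the figure the ticket price should be compared against.
print(round(100 * 1.04 ** 5.5, 2))  # -> 124.07
```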
Thanks so much for the detailed response.
Wow, that OkCupid result is surprising. It has not been my experience. What are you doing that causes people to reach out to you in a friendly (rather than romantic) way on there? (Or are you the one reaching out?)
And I agree with regard to the intellectual standard, especially if you consider your intelligence a defining characteristic. Reading the discussion here (and not having much to contribute) has… recontextualized my own self-image.
Still, it seems reasonable to point out the opportunity cost of spending a couple trillion dollars on a misguided war effort. It is true that the economy would be in better shape without those expenditures, and it’s also probably true that US federal budget constraints would be different as a result. (However, the money may still have been spent elsewhere rather than on scientific research.)
“UFO” has a colloquial sense that does, in fact, mean aliens (or trans-dimensional beings or what have you). I would posit that this is the sense of the word Eliezer used in the quoted text.