Interesting that your debate predictions tend to be too low. In my debate experience, nearly everyone consistently overestimated their likelihood of winning a given round, and this bias tended to increase the better the debaters perceived themselves to be.
Upvoted for introducing the very useful term “effective belief”.
I think if we tabooed (taboo’d?) “arbitrary”, we would all find ourselves in agreement about our actual predictions.
But because it is the standard value, you can be more confident that they didn’t “shop around” for the p-value that was most convenient for the argument they wanted to make. It’s the same reason people like to see quarterly data for a company’s performance: if a company is trying to raise capital and reports its earnings for the period “January 6 – April 12”, you can bet that there were big expenses on January 5 and April 13 that they’d rather not include. This is much less of a worry if they are using standard accounting periods.
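To see the “shopping” effect concretely, here is a minimal hypothetical simulation (not from the original discussion): on pure noise, a single pre-committed test comes out significant at roughly the nominal 5% rate, while reporting the best of five hand-picked windows, the analogue of a “January 6 – April 12” accounting period, comes out significant noticeably more often.

```python
# Hypothetical simulation: "shopping" for a convenient test window
# inflates the false-positive rate well past the nominal 5%.
import math
import random
import statistics

random.seed(0)

def p_value(sample):
    # Two-sided p-value for "mean = 0" under a normal approximation.
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    z = abs(statistics.mean(sample)) / se
    return math.erfc(z / math.sqrt(2))

trials = 2000
precommitted = 0  # always test the same, pre-registered window
shopped = 0       # test five overlapping windows, report the best one

for _ in range(trials):
    data = [random.gauss(0, 1) for _ in range(200)]  # pure noise
    windows = [data[s:s + 100] for s in (0, 25, 50, 75, 100)]
    precommitted += p_value(windows[0]) < 0.05
    shopped += min(p_value(w) for w in windows) < 0.05

print(f"pre-committed window: {precommitted / trials:.1%} significant")
print(f"best of five windows: {shopped / trials:.1%} significant")
```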
All good points. To clarify, 50% is the marginal tax rate from the OP’s system alone. A major reason that effective marginal tax rates can be so high is that programs like (to be US-centric) food stamps and Medicaid are means-tested, so they phase out or go away entirely as you make more income. If the OP’s system would retain those kinds of programs, their contribution to the marginal tax rate would come on top of the 50% cited above. The net effect of enacting this system would depend on which parts of the current bundle of social insurance programs it would displace (in the US, presumably the EITC and TANF, at least).
I don’t think that’s quite right. The marginal tax rate is going to be 50% no matter the value of x, given your formula. Your social security payment is half the difference between your income and the x threshold, so each additional dollar you earn below that threshold loses you 0.5 dollars of social security. This is true whether the threshold is $10,000 or $100,000.
You are right, though, that there will be a correspondence between the minimum wage and the level of x. I don’t think this is causal, but popular notions about the ideal levels for both the minimum wage and ‘x’ will probably both reflect underlying notions about an “acceptable standard of living”. If there’s a correspondence between the level of the minimum wage and the fraction of people who give up working because of this system, I think it would be chiefly because of this correlation (in addition to the employment effect of having a minimum wage at all).
Interesting idea. It’s in the same family as the Earned Income Tax Credit and the Negative Income Tax.
The immediate potential downside I see is that this would effectively institute a very high marginal tax rate on income below ‘x’. For every additional dollar that someone who makes less than x earns, they lose 0.5 dollars of social security. That’s a 50% implicit marginal tax rate, on top of whatever the official marginal tax rate is. By comparison, the highest marginal tax rate for federal income taxes in the United States is 35%, which is only applied to household earnings beyond $370,000 (source). The implication of standard economic theory is that many people would simply choose not to work and earn 0.75x.
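To make the arithmetic concrete, here is a minimal sketch assuming the benefit is half the gap between earnings and x (the formula discussed in this thread); the names and numbers are illustrative only.

```python
# Sketch of the implicit marginal tax rate, assuming the benefit
# formula is: payment = max(0, (x - earned) / 2). Illustrative only.
def total_income(earned: float, x: float) -> float:
    benefit = max(0.0, (x - earned) / 2)
    return earned + benefit

x = 20_000
for earned in (0, 10_000, 10_001, 20_000):
    print(f"earned {earned:>6,} -> total {total_income(earned, x):>9,.2f}")
# Earning one more dollar (10,000 -> 10,001) raises total income by
# only $0.50: a 50% implicit marginal tax rate below the threshold.
```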
Long-time reader, only occasional commenter. I’ve been following LW since it was on Overcoming Bias, which I found via Marginal Revolution, which I found via the Freakonomics Blog, which I found when I read and was fascinated by Freakonomics in high school. Reading the sequences, it all clicked and struck me as intuitively true. Although my “mistrust intuition” instinct is a little uncomfortable with that, it all seems to hold up so far.
In the spirit of keeping my identity small I don’t strongly identify with too many groups or adjectives. However, I’ve always self-identified as “smart” (whatever that means). If you were modeling my utility function using one variable, I’m most motivated by a desire to learn and know more (like Tsuyoku Naritai, except without the fetish for unnecessary Japanese). I’ve spent most of my life alternately trying to become the smartest person in the room and looking for a smarter room.
I just graduated from college and am starting work at a consulting firm in Chicago soon, which I anticipate will be the next step in my search for a smarter room. My degree is in economics, a discipline I enjoy because it is pretty good at translating incorrect premises into useful conclusions. I also dabbled fairly widely, realizing spring of my senior year that I should have started taking computer science earlier.
I’ve been a competitive debater since high school, which has helped me develop many useful skills (public speaking, analyzing arguments, brainstorming pros/cons rapidly, etc.). I was also exposed to some bad habits (the idea that you can believe whatever you want if no one can beat your arguments, the tendency to come to genuinely believe that your arbitrarily assigned side is correct). Reading some of the posts here, especially Your Strength as a Rationalist, helped me crystallize some of these downsides, though I still rate the experience as strongly positive.
I am a male and a non-theist, although I’ve grown up in an area where many of my family members and acquaintances have real and powerful Christian beliefs (not belief in belief, the real deal). This has left me with a measure of reverence for the psychological and rhetorical power of religion. I don’t have particularly strong feelings on cryonics or the singularity, probably because I just don’t find them that interesting. Perhaps I should care about them more, given how important they could be, but I haven’t expended any effort to do so thus far. It makes me wonder if “interestingness bias” is a real phenomenon.
My participation here over the years has been limited to reading, lurking, and an infrequent comment here and there. I’ve had a couple ideas for top level posts (including one on my half-baked notion that “rationalists” should consider following virtue ethics), but I have not yet overcome my akrasia and written them. Just recently, I have started using Anki to really learn the sequences. I am also using it to memorize basically useless facts that I can pull out in pub trivia contests, which I enjoy probably more than I should.
I simply care a lot about the truth and I care comparatively less about what people think (in general and also about me), so I’m often not terribly concerned about sounding agreeable.
Can you clarify this statement? As phrased, it doesn’t quite mesh with the rest of your self-description. If you truly did not care what other people thought, it wouldn’t bother you that they think untrue things. A more precise formulation would be that you assign little or no value to untrue beliefs. Furthermore, you assign very little value to any emotions the person has bound up in holding that belief.
The untrue belief and the attached emotions are not the same thing, though they are obviously related. It does not follow from “untrue beliefs deserve little respect” that “emotions attached to untrue beliefs deserve little respect”. The emotions are real after all.
Downloaded and set up with a couple of Divia’s decks. How many decks do you recommend working through at one time? For reference, I’m currently doing one deck on the default settings, which works out to ~40 cards a day (20 new, ~20 review) and takes 5-7 minutes.
I haven’t read the post yet, but the title is awesome.
I’m surprised that you don’t mention the humanities as a really bad case where there is little low-hanging fruit and high ideological content. Take English literature for example. Barrels of ink have been spilled in writing about Hamlet, and genuinely new insights are quite rare. The methods are also about as unsound as you can imagine. Freud is still heavily cited and applied, and postmodern/poststructuralist/deconstructionist writing seems to be accorded higher status the more impossible to read it is.
Ideological interest is also a big problem. This seems almost inevitable, since the subject of the humanities is human culture, which is naturally bound up with human ideals, beliefs, and opinions. Academic disciplines are social groups, so they have a natural tendency to develop group norms and ideologies. It’s unsurprising that this trend is reinforced in those disciplines that have ideologies as their subject matter. The result is that interpretations which do not support the dominant paradigm (often a variation on how certain sympathetic social groups are repressed, marginalized, or “otherized”) are themselves suppressed.
One theory of why the humanities are so bad is that there is no empirical test for whether an answer is right or not. Incorrect science leads to incorrect predictions, and even incorrect macroeconomics leads to suboptimal policy decisions. But it’s hard to imagine what an “incorrect” interpretation of Hamlet even looks like, or what the impact of having an incorrect interpretation would be. Hence, there’s no pressure towards correct answers that offsets the natural tendency for social communities to develop and enforce social norms.
I wonder if “empirical testability” should be included alongside the low-hanging fruit heuristic.
I think you can get some useful insights into the reasons why punishments might differ based on moral luck if you take an ex ante rather than an ex post view. I.e. consider what effect the punishment has in expectation at the time that Alice and Yelena are deciding whether to drive home drunk or not, and how recklessly to drive if they do.
Absent an extremely large and pervasive surveillance system, most instances of drunk driving will go undetected. In order to achieve optimal deterrence of drunk driving, those who do get caught have to be punished much more. While drunk drivers will face different punishments ex post, the expected punishment they face ex ante will be the same. If there are in fact factors that make your drunk driving less dangerous (less drunk, more skilled driver, slower speed, etc.), these will decrease the expected punishment.
So basically, the ex ante expected punishment for a particular dangerous act does not differ based on moral luck. Ex post punishment does differ, and that is a cost, but I think the countervailing benefit of not having a costly and intrusive surveillance system outweighs it.
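A back-of-the-envelope version of the ex ante calculation (illustrative numbers only):

```python
# Illustrative only: holding the ex ante expected punishment fixed,
# the ex post sentence must scale inversely with detection probability.
def sentence_needed(expected_punishment: float, p_caught: float) -> float:
    return expected_punishment / p_caught

target = 1.0  # desired expected punishment, in arbitrary disutility units
for p_caught in (0.5, 0.1, 0.01):
    print(f"p(caught) = {p_caught:>4}: "
          f"sentence = {sentence_needed(target, p_caught):>6.1f} units")
# Every drunk driver faces the same expected punishment ex ante, even
# though only the unlucky few who are caught bear it ex post.
```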
Notes:
This is the second time I’ve linked this recently, but Gary Becker’s Crime and Punishment: An Economic Approach is a very useful way to think through these issues.
This argument applies much less in the attempted murder / murder case, because the chance that an attempted murder is caught and prosecuted is much higher, probably even higher than the probability a murderer is caught (because the victim usually has lots of relevant information).
For the purposes of this comment, I treated drunk people as rational actors. They are not, but this is a non-issue, because drunk selves are only allowed to exist at the discretion of sober selves.
You are certainly correct, and I think what you say reinforces the point. Building comfort is a social function rather than an information exchange function, which is why you don’t particularly care whether or not your conversation leads to more accurate predictions for tomorrow’s weather.
Let me try a Hansonian explanation: conversation is not about exchanging information. It is about defining and reinforcing social bonds and status hierarchies. You don’t chit-chat about the weather because you really want to consider how recent local atmospheric patterns relate to long-run trends, you do it to show that you care about the other person. If you actually cared about the weather, you would excuse yourself and consult the nearest meteorologist.
Written communication probably escapes this mechanism: the mental machinery for social interaction is less involved, and the mental machinery for analytical judgment has more room to operate. One reason is that there was no written word in the evolutionary environment, so we didn’t evolve to apply our social interaction machinery to it. A second reason is that written communication is relatively easily divorced from the writer (you can encounter a written argument across vast spatial or temporal separation), so the cues that kick the social brain into gear are absent or subdued. The result, as you point out, is that it is easier to critically engage with a written argument than with a spoken one.
This seems like something that natural conversationalists already do intuitively. They have a broad range of topics about which they can talk comfortably (either because they are knowledgeable about the specific subjects or because they have enough tools to carry on a conversation even in areas with which they are unfamiliar), and they can steer the conversation around these topics until they find one that their counterpart can also talk comfortably about. Bad conversationalists either aren’t comfortable talking about many subjects, are bad at transitioning from one subject to another, or can’t sense or don’t care when their counterpart doesn’t care about a given topic.
The flip side of this is that there are 3 ways of improving one’s conversational ability: learning more about more subjects, practicing transitions between various topics, and learning the cues for when one’s counterpart is bored or uninterested by the current topic. Kaj focuses on the second of these, but I think the other two strategies ought not be forgotten. It’s no use learning to steer the conversation when there are no areas of overlapping interest to steer to, or when you can’t recognize whether you are in one or not.
I’m in. Started reading through it this past winter but stopped. Hopefully this group will provide some motivation.
You might check out Gary Becker’s writings on crime, most famously Crime and Punishment: An Economic Approach. He starts from the notion that potential criminals engage in cost-benefit analysis and comes to many of the same conclusions you do.
I agree that applied rationality is important, but I’m not sure there needs to be another site for that to happen. This recent post, for instance, seems like exactly what the OP wants to see. Perhaps what should be done is creating an official “Applied Rationality” tag for all such posts, along with an easy way to filter them. That way, if a bad scenario happens where new readers more interested in politicized fighting than in rationality are drawn to this site because there’s a discussion on gun control, those discussions can be easily quarantined. But if this site maintains its high signal/noise ratio, the community benefits from trying out its tools in action.
It looks like a couple of footnotes got cut off.