I agree these are possibilities. However, it seems to me that if you’re going to use improbable good fortune in some areas as evidence for being in a holodeck, it only makes sense to use misfortune (or at least lack of optimization, or below-averageness) in other areas as evidence against it. It doesn’t sit well with me to write off every shortcoming as an intentional contrivance to make the simulation more “real” for you, or to give you additional challenges. Of course, we’re only talking a priori probability here; if, say, Eliezer directly catalyzed the Singularity and found himself historically renowned, the odds would have to go way up.
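To make the for-and-against symmetry concrete, here is a minimal odds-form Bayes sketch in Python. Every number in it (the prior and both likelihood ratios) is a hypothetical assumption, chosen only to show how good fortune and mediocrity pull the posterior in opposite directions:

```python
# Illustrative only: odds-form Bayesian update with made-up numbers.
prior_odds = 1e-6 / (1 - 1e-6)   # hypothetical prior odds of being in a holodeck

# Hypothetical likelihood ratios P(evidence | holodeck) / P(evidence | base reality)
lr_good_fortune = 20.0   # improbable good fortune: counts in favor of the holodeck
lr_mediocrity   = 0.1    # unoptimized looks, wealth, etc.: counts against it

posterior_odds = prior_odds * lr_good_fortune * lr_mediocrity
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)  # here the two pieces of evidence largely cancel
```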
Shouldn’t the fact that they can probably imagine better versions of themselves reduce this probability? If you’re in a holodeck, in addition to putting yourself at the center of the Singularity, why wouldn’t you also give yourself the looks of Brad Pitt and the wealth of Bill Gates?
I use Google SMS for that. Just text ’em with “define [word]” and you’ve got a dictionary at your fingertips.
In a confrontation between two parties, it’s more likely that the stronger one will pose the greater threat to you. By supporting the underdog and hoping for a fluke victory, you’re increasing your own survival odds. It seems we’re probably evolved to seek parity—where we then have the best chance of dominating—instead of seeking dominant leaders and siding with them, which is a far more complex and less certain process.
Am I missing something? Also, it would be interesting to see whether females and males have the same reactions toward the overdog.
There are some brilliant theists out there. The best theologians are largely indistinguishable from the best philosophers, who are typically quite rational people, to say the least.
Still, the chances that the most advanced theologians are the most advanced rationalists—more advanced than the best philosophers, physicists, computer scientists, etc., rather than merely comparable—seem slim.
Agreed. Particularly in hypothetical cases where one rationally concludes that it would be in one’s best interest to behave irrationally, e.g., over-confidence in oneself or belief in God. Even if one arrived at those conclusions, it’s not clear to me how anyone could decide to become irrational in those ways. Pascal’s notion of “bootstrapping” oneself into religious belief never struck me as very plausible. Interestingly, though, “faking” confidence in oneself often does tend to lead to real confidence via some sort of feedback mechanism, e.g., interactions with women.
Good point. It might be that there are very few business ideas that actually are rational to have confidence in—otherwise, someone probably would have implemented them already. In other words, most business ideas, even the ones that turn out to be good ones, might be inherently bad gambles a priori.
While I can imagine a situation where one’s utility function would be as you described, it’s a pretty contrived one, e.g., a destitute crack addict suffering from a painful terminal illness, where the second best choice would be suicide. More importantly, for the typical crack user—the kind Phil Goetz was referencing—there’s almost always going to be something they could be spending the money on that would give them a higher expected utility over the long run (“bettering one’s situation”). It’s no small claim to say there isn’t.
Still, if everyone who does succeed has an irrational belief in their own success, then it’s not wrong to conclude that such a belief is probably a prerequisite (though certainly not a “guarantee”) for success.
Doesn’t this make some very big assumptions about the fixity of people’s circumstances? If my life is so bad that smoking crack begins to seem rational, then surely, taking actual steps to improve my life would be more rational. Similarly, I imagine that the $5 spent on a lottery ticket could be better spent on something that was a positive first step toward improving even the worst of circumstances. Seems the only way this wouldn’t be true would be if you simply assert, by fiat, that the person’s circumstances are immutable, but I’m not sure whether this accords with reality. (One’s politics are clearly implicated here.)
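To put rough numbers on the lottery point, here is a short sketch; the payout, odds of winning, and alternative return are purely illustrative assumptions, not real figures:

```python
# Illustrative only: expected value of a $5 lottery ticket vs. the same $5
# spent on something with even a modest, reliable payoff.
ticket_cost = 5.00
jackpot = 1_000_000.00
p_win = 1e-8                                 # hypothetical odds of winning

ev_lottery = p_win * jackpot - ticket_cost   # roughly -$4.99
ev_alternative = 0.10 * ticket_cost          # assume a modest 10% net expected return

print(ev_lottery, ev_alternative)
```

Under almost any such assumptions, the ticket is an expected loss, while nearly any direct step toward improving one’s circumstances is not.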
Someone should do a post attempting to define what exactly “rationalism” is. Right now I see lots of discussion on how to build rationalist communities, whether rationalism always “wins,” why you should be a rationalist, etc., but very little on what the content of this term is, and very little on how to be a rationalist. A newcomer could be excused for thinking that “rationalist” just means someone who goes around exhorting others to become rationalists. Maybe there’s nothing wrong with that, though; perhaps rationalism, at its core, is simply reminding yourself and others to think hard about things at all times.
Yes. Rationalism shouldn’t be seen as a bag of discrete tricks, but rather as the means for achieving any given end—what it takes to do something you want to do. The particulars will vary, of course, depending on the end in question, but the rational individual should do better at figuring them out.
On a side note, I’m not sure coming up with better slogans, catchphrases, and neologisms is the right thing to be aiming for.
Whatever it is you want to do with your life. I can’t think of many fields in which a rational outlook wouldn’t be of use. This goes back to fundamental values, interests, talents, etc.; the dictates of rationalism can’t decide everything for you.
All else being equal, shouldn’t rationalists, almost by definition, win? The only way this wouldn’t happen would be in a contest of pure chance, in which rationality could confer no advantage. It seems like we’re just talking semantics here.
Anyone care to explain why this comment (and for that matter, the one below) was downvoted? Given that my karma score just dropped about 10 points in under an hour, I can only assume someone is going through my history and downvoting me for some reason. Great use of the karma system.
I have trouble seeing why radical honesty should be seen as a virtue by default. It’s fairly clear that radical honesty doesn’t necessarily promote happiness. From a utilitarian perspective, then, it should be value-neutral.
I personally place a high value on having true beliefs. More than most people. However, I’m not sure I’d value true beliefs over my own happiness. If I were a devout Christian, for example, and derived a great amount of comfort from my faith, I’m not sure I’d want someone to convince me otherwise. Given that most people will value true beliefs even less than I do, I’d find it even harder to justify convincing others of God’s non-existence. That’s imposing my own value judgments upon others, often to the detriment of their happiness. Studies have shown that depressed people are more likely to have accurate beliefs than happy people. If there’s a causal connection between the two, what are we to make of radical honesty?
Similarly, practicing radical honesty in the social sphere is unlikely to win one many friends, and will in fact piss a lot of people off. Little white lies grease many of our most important social interactions. What’s to be gained by a policy of radical honesty in that domain?
Radical honesty is a chief virtue in science and academia, of course; maybe the chief virtue. But to apply that norm to the world at large is to ignore basic facts of human psychology and social interaction.
An easy way to see when your comments have been replied to, and to read those replies, would be great. Reddit has this feature. Right now I’m unaware of any way to do this on LW besides checking each of the individual parent posts.
The latter.
Write shorter posts. Write in a simpler and less oracular prose style. And write more substantive posts—at times, it seems as if you believe your every passing thought deserves 2,000 words. I’ll often read your posts and, while recognizing some germ of a worthwhile idea there, regret the time and effort it took to locate it.
Yes, but these are things most reasonably intelligent people know, or figure out, anyway. It seems correct to chalk up these insights to rationality, but trivially so. I don’t see what extra work studying rationality per se would be doing for us here.
Mensa has a fairly low cutoff. If you attended a prestigious college or grad school, work in a profession or in the sciences, or even hang out with scientifically or philosophically minded people, chances are you’ve already been a part of groups with higher aggregate IQs. I wouldn’t bother.