EDIT: this comment was made when I was in a not-too-reasonable frame of mind, and I’m over it.
Is teaching, learning, studying rationality valuable?
Not as a bridge to other disciplines, or a way to meet cool people. I mean, is the subject matter itself valuable as a discipline in your opinion? Is there enough to this? Is there anything here worth proselytizing?
I’m starting to doubt that. “Here, let me show you how to think more clearly” seems like an insult to anyone’s intelligence. I don’t think there’s any sense teaching a competent adult how to change his or her habits of thought. Can you imagine a perfectly competent person—say, a science student—who hasn’t heard of “rationalism” in our sense of the word, finding such instruction appealing? I really can’t.
Of course I’m starting to doubt the value (to myself) of thinking clearly at all.
Yesterday I spoke with my doctor about skirting around the FDA’s not yet having approved a drug that may be approved in Europe first (though it may be approved in the US first). I explained that one first-world safety organization’s imprimatur is good enough for me until the FDA gives a verdict, and that harm from taking a medicine is not qualitatively different from harm from not taking a medicine.
We also discussed a clinical trial of a new drug, and I had to beat him with a stick until he abandoned “I have absolutely no idea at all if it will be better for you or not”. I explained that abstractly, a 50% chance of being on a placebo and a 50% chance of being on a medicine with a 50% chance of working was better than assuredly taking a medicine with a 20% chance of working, and that he was able to give a best guess about the chances of it working.
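The arithmetic behind that comparison can be sketched in a few lines. The 50%/50% and 20% figures are the ones quoted above; treating the assignment and the drug’s effectiveness as independent is an assumption made purely for illustration:

```python
# Expected chance of receiving an effective treatment, per the figures above.

# Option A: enroll in the trial.
p_active = 0.5           # chance of being assigned the real drug, not placebo
p_works_if_active = 0.5  # doctor's best guess that the new drug works
p_trial = p_active * p_works_if_active  # 0.5 * 0.5 = 0.25

# Option B: take the established medicine outside the trial.
p_established = 0.20     # chance the established medicine works

print(p_trial, p_established)  # 0.25 0.2 -- the trial wins in expectation
assert p_trial > p_established
```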
In practice, there are other factors involved, in this case it’s better to try the established medicine first and just see if it works or not, as part of exploration before exploitation.
Better yet, if you aren’t feeling like being altruistic you go on the trial then test the drug you are given to see if it is the active substance. If not you tell the trial folks that placebos are for pussies and go ahead and find either an alternate source of the drug or the next best thing you can get your hands on. It isn’t your responsibility to be a control subject unless you choose to be!
Downvoted for encouraging people to screw over other people by backing out of their agreements… What would happen to tests if every trial patient tested their medicine to see if it’s a placebo? Don’t you believe there’s value in having control groups in medical testing?
Lessdazed is describing quite a messy situation. Let me split out various subcases.
First is the situation with only one approval authority running randomised controlled trials on medicines. These trials are usually in three phases. Phase I on healthy volunteers to check for toxicity and metabolites. Phase II on sufferers to get an idea of the dose needed to affect the course of the illness. Phase III to prove that the therapeutic protocol established in Phase II actually works.
I have health problems of my own and have fancied joining a Phase III trial for early access to the latest drugs. Reading around, it seems to be routine for drugs to fail in Phase III. Outcomes seem to be vaguely along the lines of: three in ten are harmful, six in ten are useless, one in ten is beneficial. So the odds that a new drug will help, given that it was the one in ten that passed Phase III, are good, while the odds that a new drug will help, given that it is about to start Phase III, are bad.
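The conditioning point here can be made explicit with Bayes’ theorem. The 3/10 harmful, 6/10 useless, 1/10 beneficial base rates are the rough figures above; the per-category pass rates are purely hypothetical numbers chosen for illustration:

```python
# Base rates for drugs entering Phase III (rough figures from the comment above).
priors = {"harmful": 0.3, "useless": 0.6, "beneficial": 0.1}

# Hypothetical probabilities that each kind of drug passes Phase III
# (these numbers are assumptions, purely for illustration).
p_pass = {"harmful": 0.05, "useless": 0.10, "beneficial": 0.90}

# P(pass) by the law of total probability.
p_pass_total = sum(priors[k] * p_pass[k] for k in priors)

# Posterior P(beneficial | passed Phase III) via Bayes' theorem.
posterior_beneficial = priors["beneficial"] * p_pass["beneficial"] / p_pass_total

print(round(priors["beneficial"], 2))   # prior: 0.1
print(round(posterior_beneficial, 2))   # posterior: ~0.55
```

Under these assumed pass rates, a drug that has merely *entered* Phase III helps one time in ten, while one that has *passed* helps better than half the time, which is the asymmetry the comment is pointing at.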
Joining a Phase III trial is a genuinely altruistic act by which the joiner accepts bad odds for himself to help discover valuable information for the greater good.
I was confused by the idea of joining a Phase III trial and unblinding it by testing the pill to see whether one had been assigned to the treatment arm of the study or the control arm. Since the drug is more likely to be harmful than to be beneficial, making sure that you get it is playing against the odds!
Second, Lessdazed seemed to be considering the situation in which EMA has approved a drug and the FDA is blocking it in America, simply as a bureaucratic measure to defend its home turf. If it were really as simple as that, I would say that cheating to get round the bureaucratic obstacles is justified.
However the great event of my lifetime was man landing on the Moon. NASA was brilliant and later became rubbish. I attribute the change to the Russians dropping out of the space race. In the 1960s NASA couldn’t afford to take bad decisions for political reasons, for fear that the Russians would take the good decision themselves and win the race. The wider moral that I have drawn is that big organisations depend on their rivals to keep them honest and functioning.
Third: split decisions with the FDA and the EMA disagreeing, followed by a treat-off to see who was right, strike me as essential. I dread the thought of a single, global medicine agency that could prohibit a drug world wide and never be shown up by approval and successful use in a different jurisdiction.
Hmm, my comment is losing focus. My main point is that joining a Phase III trial is, on average, a sacrifice for the common good.
Downvoted for actively polluting the epistemic belief pool for the purpose of a shaming attempt. I here refer especially (but not only) to the rhetorical question:
Don’t you believe there’s value in having control groups in medical testing?
I obviously believe there’s a value in having control groups. Not only is that an obvious belief but it is actually conveyed by my comment. It is a required premise for the assertion of altruism to make sense.
My comment observes that sacrificing one’s own (expected) health for the furthering of human knowledge is an act of altruism. Your comment actively and directly sabotages human knowledge for your own political ends. The latter I consider inexcusable and the former is both true and necessary if you wish to encourage people who are actually capable of strategic thinking on their own to be altruistic.
You don’t persuade rationalists to conform to your will by telling them A is made of fire or by trying to fool them into believing A, B and C don’t even exist. That’s how you persuade suckers.
Your comment actively and directly sabotages human knowledge for your own political ends.
OK, see, I thought this might happen. I love your first comment, much more than ArisKatsaris’, but despite it having some of the problems ArisKatsaris is referring to, not because it is perfect. I only upvoted his comment so I could honestly declare that I had upvoted both of your comments, as I thought that might defuse the situation—to say I appreciated both replies.
Don’t get me wrong—I don’t really mind ArisKatsaris’ comment and I don’t think it’s as harmful as you seem to, but I upvoted it for the honesty reason.
You just committed an escalation of the same order of magnitude that he did, or more, as his statements were phrased as questions and were far less accusatory. I thought you might handle this situation like this and I mildly disapprove of being this aggressive with this tone this soon in the conversation.
A very slightly harmful instance of a phenomenon that is moderately bad when done on things that matter.
I thought you might handle this situation like this and I mildly disapprove of being this aggressive with this tone this soon in the conversation.
Where ‘this soon’ means the end. There is nothing more to say, at least in this context. (As a secondary consideration my general policy is that conversations which begin with shaming terminate with an error condition immediately.) I do, however, now have inspiration for a post on the purely practical downsides of suppressing consideration of rational alternatives in situations similar to that discussed by the post.
EDIT: No, not a post. It is an open thread comment by yourself that could have been a discussion post!
I obviously believe there’s a value in having control groups. Not only is that obvious but it is actually conveyed by my comment. It is a required premise for the assertion of altruism to make sense.
Not so, there exists altruism that is worthless or even of negative value. An all-altruistic CooperateBot is what allows DefectBots to thrive. Someone can altruistically spend all his time praying to imaginary deities for the salvation of mankind, and his prayers would still be useless. To think that altruism is about value is a map-territory confusion.
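The CooperateBot/DefectBot point can be made concrete with a toy iterated prisoner’s dilemma. The payoff values and the ten-round match length below are conventional choices for illustration, not anything from the thread:

```python
# Toy iterated prisoner's dilemma: an unconditional altruist is simply exploited.
PAYOFF = {  # (my_move, their_move) -> my payoff; standard PD values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cooperate_bot(history):  # always cooperates, no matter what
    return "C"

def defect_bot(history):     # always defects, no matter what
    return "D"

def play(bot_a, bot_b, rounds=10):
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a, b = bot_a(hist_b), bot_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(cooperate_bot, defect_bot))  # (0, 50): DefectBot thrives off CooperateBot
```

Against a population containing DefectBots, unconditional cooperation earns nothing while feeding the defectors, which is the sense in which altruism per se need not produce value.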
My comment observes that sacrificing one’s own (expected) health for the furthering of human knowledge is an act of altruism.
Your comment doesn’t just say it’s altruistic. It also tells him that if he doesn’t feel like being an altruist, that he should tell people that “placebos are for pussies”. Perhaps you were just joking when you effectively told him to insult altruists, and I didn’t get it.
Either way, if he defected in this manner, he wouldn’t just be partially sabotaging the experiment he signed up for, he’d probably also be sabotaging his future chances of being accepted in any other trial. I know that if I were a doctor, I would be less likely to accept you in a medical trial.
Your comment actively and directly sabotages human knowledge for your own political ends.
Um, what? I don’t understand. What deceit do you believe I committed in my above comment?
Wedrifid made a strategic observation that if a person cares more about their own health than the integrity of the trial it makes sense to find out whether they are on placebo and, if they are, leave the trial and seek other solutions. He did this with somewhat characteristic colorful language.
You then voted him down for expressing values you disagree with. This is a use of downvoting that a lot of people here frown on, myself included (though I don’t downvote people for explaining their reasons for downvoting, even if those reasons are bad). Even if wedrifid thought people should screw up controlled trials for their own benefit his comment was still clever, immoral or not.
Of course, he wasn’t actually recommending the sabotage of controlled trials—though his first comment was sufficiently ambiguous that I wouldn’t fault someone for not getting it. Luckily, he clarified this point for you in his reply. Now that you know wedrifid actually likes keeping promises and maintaining the integrity of controlled trials what are you arguing about?
Wedrifid made a strategic observation that if a person cares more about their own health then the integrity of the trial it makes sense to find out whether they are on placebo and, if they are, leave the trial and seek other solutions.
To me it didn’t feel like an observation, it felt like a very strong recommendation, given phrases like “Better yet”, “tell them placebos are for pussies”, “It isn’t your responsibility!”, etc.
Even if wedrifid thought people should screw up controlled trials for their own benefit his comment was still clever, immoral or not.
Eh, not really. It seemed shortsighted—it doesn’t really give an alternate way of procuring this medicine, it has the possibility to slightly delay the actual medicine from going on the market (e.g. if other test subjects follow the example of seeking to learn if they’re on a placebo and also abandon the testing, thus forcing the thing to be restarted from scratch), and if a future medicine goes on trial, what doctor will accept test subjects who are known to have defected in this way?
Now that you know wedrifid actually likes keeping promises and maintaining the integrity of controlled trials what are you arguing about?
Primarily I fail to understand what deceit he’s accusing me of when he compares my own attitude to claiming that “A is made of fire” (in context effectively meaning that I said defectors will be punished posthumously, that they’ll go to hell; that I somehow lied about the repercussions of defection).
He attacks me for committing a crime against knowledge—when of course that was what I thought he was committing, when I thought he was seeking to encourage control subjects to find out if they’re a placebo and quit the testing. Because you know—testing = search for knowledge, sabotaging testing = crime against knowledge.
Basically I can understand how I may have misunderstood him—but I don’t understand in what way he is misunderstanding me.
You’re conflating two things here: whether rationality is valuable to study, and whether rationality is easy to proselytize.
My own experience is that it’s been very valuable for me to study the material on Less Wrong: I’ve been improving my life lately in ways I’d given up on before, I’m allocating my altruistic impulses more efficiently (even the small fraction I give to VillageReach is doing more good than all of the charity I practiced before last year), and I now have a genuine understanding (from several perspectives) of why atheism isn’t the end of truth/meaning/morals. These are all incredibly valuable, IMO.
As for proselytizing ‘rationality’ in real life, I haven’t found a great way yet, so I don’t do it directly. Instead, I tell people who might find Less Wrong interesting that they might find Less Wrong interesting, and let them ponder the rationality material on their own without having to face a more-rational-than-thou competition.
Instead, I tell people who might find Less Wrong interesting that they might find Less Wrong interesting, and let them ponder the rationality material on their own without having to face a more-rational-than-thou competition.
This phrase jumped out in my mind as “shiny awesome suggestion!” I guess in a way it’s what I’ve been trying to do for a while, since I found out early, when learning how to make friends, that most people and especially most girls don’t seem to like being instructed on living their life. (“Girls don’t want solutions to their problems,” my dad quotes from a book about the male versus the female brain, “they want empathy, and they’ll get pissed off if you try to give them solutions instead.”)
The main problem is that most of my social circle wouldn’t find LW interesting, at least not in its current format. Including a lot of people who I thought would benefit hugely from some parts, especially Alicorn’s posts on luminosity. (I know, for example, that my younger sister is absolutely fascinated by people, and loves it when I talk neuroscience with her. I would never tell her to go read a neuroscience textbook, and probably not a pop science book either. Book learning just isn’t her thing.)
Depending on what you mean by ‘format’, you might be able to direct those people to the specific articles you think they’d benefit from, or even pick out particular snippets to talk to them about (in a ‘hey, isn’t this a neat thing’ sense, not a ‘you should learn this’ sense).
“Pick out particular snippets” seems to work quite well. If something in the topic of conversation tags, in my mind, to something I read on LessWrong, I usually bring it up and add it to the conversation, and my friends usually find it neat. But except with a few select people (and I know exactly who they are) posting an article on their facebook wall and writing “this is really cool!” doesn’t lead to the article actually being read. Or at least they don’t tell me about reading it.
If facebook is like twitter in that regard, I mostly wouldn’t expect you to get feedback about an article having been read—but I’d also not expect an especially high probability that the intended person actually read it, either. What I meant was more along the lines of emailing/IMing them individually with the relevant link. (Obviously this doesn’t work too well if you know a whole lot of people who you think should read a particular article. I can’t advise about that situation—my social circle is too small for me to run into it.)
It wasn’t actually on account of this discussion that I introduced my friend to LW (since I didn’t read Swimmer and Adelene’s comments till afterward)- I just posted the reaction here because it was funny and relevant.
I don’t know what Twitter is like, but the function on Facebook that I prefer to use (private messages) is almost like email and seems to be replacing email among much of my social circle. I will preferentially send my friends FB messages instead of emails, since I usually get a reply faster.
Writing on someone’s wall is public, and might result in a slower reply because it seems less urgent. But it’s still directed at a particular person, and it would be considered rude not to reply at all. But when I post an article or link, the reply I often get is “thanks, looks neat, I’ll read that later.”
Can you imagine a perfectly competent person—say, a science student—who hasn’t heard of “rationalism” in our sense of the word, finding such instruction appealing? I really can’t.
I was recently around some old friends who are lacking in rationality, and kept finding myself at a complete loss. I wanted to just grab them and say exactly that.
In other news, I’ve learned that some lessons in how to politely and subtly teach rationality would be quite welcome >.>
Are you not comfortable with that happening at all, or not comfortable with being involved in one?
What are your concerns—wasting your time, being perceived as belonging to a “weird” group, being drawn into a group process that is a net negative value to you?
I realize I’m not answering your original question. I’m still thinking about that one.
I’m more than a little surprised to see you say this, given your past writings on the subject—if asked I would certainly have guessed that your reply to your own question would have been “yes, of course”.
I’m curious to know more, if you’re comfortable saying more. Not sure what to say otherwise.
People with a common interest meeting up seems natural enough. I have reservations about normativism with respect to ways of thinking, but it does seem to me that what we are learning here is worthwhile in and of itself: because it is about finding out exactly what we are, and because—just like a zebra—what we are is something rare and peculiar and fascinating.
Well, if there are other people who feel that way, they’re free to meet up to share that interest.
My serious answer: I’m not sure there’s a well-defined, cumulative, discipline-like body of knowledge in the LessWrong memeplex. I don’t know how it could be presented to an intelligent outsider who’s never heard of it. I don’t know whether it could be presented in a way that makes us look good.
My not-so-serious answer: a lot of the time I just don’t care any more.
It sounds to me like you might be in some kind of depression or low-enthusiasm state. I don’t hear a coherent critique in these comments, so much as a general sense of “boo ‘rationality’/LW”.
Contrast:
Are you not comfortable with that happening at all, or not comfortable with being involved in one?
I’m not comfortable with it existing. I think it’s not useful.
and
People with a common interest meeting up seems natural enough.
Well, if there are other people who feel that way, they’re free to meet up to share that interest
This feels inconsistent; as if you had been caught giving a non-true rejection.
Now and then I go a bit crazy and find it difficult to value anything. Luckily the worst symptom is that I don’t get much done for a while, and post gloomy comments on websites.
At least for me, being able to look at things like this (“I am in a bad mood” instead of “everything sucks”) is quite a blessing. Hope you feel better now or soon! [Edit: wording tweak.]
You might be reading SarahC as saying that teaching a competent adult to change his or her habits of thought is not possible (if you’re not, ignore this comment), but I think she’s saying that it’s not worthwhile.
If it is not worthwhile for competent adults to learn something as basic as “how to change their mind” then I would have to agree with the conclusion that we are doomed.
Er, why, exactly? Most competent adults in history have not known how to change their mind. The world has improved because of those who do. It seems to me that the key variable in teaching rationality is whether the student is willing. Most people just don’t care that much about the truth of their far-beliefs. But occasional people do, and those are the people you can teach. That’s why everyone here is a truth fetishist.
What we need is more pro-truth propaganda so that in the next generation the pool of potential rationalists is larger.
The emphasis here is on worthwhile: the idea that changing your mind, and knowing how to, has a tangible benefit, and one that is (generally, on average) worth the effort it takes to learn. If there’s no particular benefit to changing your mind, then either (a) you have already selected the best outcome or (b) your choices are irrelevant.
If this is the best possible world, then I feel okay calling us doomed; it’s a pretty lousy world.
As to irrelevancy, well, to think that I’d live the same life regardless of whether “Will you marry me?” is met with yes or no? That is not a world I want. The idea that given a set of choices, the outcome remains the same across them is just a terrifying nihilistic idea to me.
The claim is that, for lots of people, the net gain from changing their mind is so minimal as to not be worth the time spent studying. This implies strongly that, for lots of people, they have either (a) already made the best choice or (b) are not faced with any meaningful choices.
(a) implies that either lots of people are completely incapable of good decisions or are the Chosen Of God, their every selection Divinely Inspired from amongst the best of all possible worlds. Which goes back to this being a pretty lousy world.
(b) flies in the face of all the major decisions people normally make (marriage, buying a house, having children, etc.), and suggests that, statistically, a lot of the “important decisions” in my own life are probably meaningless unless I am the Chosen Of Bayes, specially exempt from the nihilism that blights the mundane masses.
For some people there may be the class (c) that the cost of learning rationality is much, much higher than normal. If your focus is on this group, that’s a whole different conversation about why I think this is really rare :)
Just to begin with, the above is a terrible way to structure an inductive argument about something as variable as human behavior. Obviously few people are “completely incapable of good decisions or are the Chosen Of God” and no important decisions in life are “meaningless”. It is, however, the case that most decisions don’t matter all that much and that, when they do, people usually do a pretty good job without special training.
But the real issue that you’re missing is opportunity cost. Lots of people don’t know how to read or do arithmetic. Lots of people can’t manage personal finances. Lots of people need more training to get a better job. Lots of people suffer from addiction. Lots of people don’t have significant chunks of free time. Lots of people have children to raise. Almost everyone could benefit from learning something but many people either do not have the time or would benefit far more from learning a particular skill or trade rather than Bayesian math and how to identify cognitive biases.
Almost everyone could benefit from learning something but many people either do not have the time or would benefit far more from learning a particular skill or trade rather than Bayesian math and how to identify cognitive biases.
I’m not disagreeing with this at all. But given the option of teaching someone nothing or teaching them this? I think it’s a net gain for them to learn how to change their mind. And I think most people have room in their life to pretty easily be casually taught a simple skill like this, or at least the basics. I’ve been teaching it as part of casual conversations with my roommate just because I enjoy talking about it.
But given the option of teaching someone nothing or teaching them this?
But that isn’t the question.
I think it’s a net gain for them to learn how to change their mind.
I think it is a net gain for a person to learn the arguments of Christian apologetics, that doesn’t mean it is worthwhile for everyone to learn the arguments of Christian apologetics. Time is a limited resource.
I’ve taught aspects of rationality to lots of people because I like talking about it too. But my friends and family have learned it as a side effect of doing something they would be doing anyway, having interesting conversations with me. Some of them are interested in things like cognitive biases and learn on their own. But we don’t yet have anything here that makes dramatic differences in people’s lives such that it is important they spend precious resources on learning it.
ETA: That was a bit brisk of me. I think we just have different definitions of “worthwhile”. :-)
If something’s being worthwhile or not is a major consideration in whether or not we are doomed, doesn’t that make it worthwhile? OTOH, if you mean “If we are the same amount of doomed whether or not people learn to change their minds, then we are very doomed,” you are right.
This is serious stuff.
It’s in Phase III.
Downvoted for actively polluting the epistemic belief pool for the purpose of a shaming attempt. I here refer especially (but not only) to the rhetorical question:
I obviously believe there’s a value in having control groups. Not only is that an obvious belief but it is actually conveyed by my comment. It is a required premise for the assertion of altruism to make sense.
My comment observes that sacrificing one’s own (expected) health for the furthering of human knowledge is an act of altruism. Your comment actively and directly sabotages human knowledge for your own political ends. The latter I consider inexcusable and the former is both true and necessary if you wish to encourage people who are actually capable of strategic thinking on their own to be altruistic.
You don’t persuade rationalists to conform to your will by telling them A is made of fire or by trying to fool them into believing A, B and C don’t even exist. That’s how you persuade suckers.
OK, see, I thought this might happen. I love your first comment, much more than ArisKatsaris’, but despite it having some problems ArisKatsaris is referring to, not because it is perfect. I only upvoted his comment so I could honestly declare that I had upvoted both of your comments, as I thought that might defuse the situation—to say I appreciated both replies.
Don’t get me wrong—I don’t really mind ArisKatsaris’ comment and I don’t think it’s as harmful as you seem to, but I upvoted it for the honesty reason.
You just committed an escalation of the same order of magnitude that he did, or more, as his statements were phrased as questions and were far less accusatory. I thought you might handle this situation like this and I mildly disapprove of being this aggressive with this tone this soon in the conversation.
A very slightly harmful instance of a phenomenon that is moderately bad when done on things that matter.
Where ‘this soon’ means the end. There is nothing more to say, at least in this context. (As a secondary consideration my general policy is that conversations which begin with shaming terminate with an error condition immediately.) I do, however, now have inspiration for a post on the purely practical downsides of suppression of consideration of rational alternatives in situations similar to that discussed by the post.
EDIT: No, not a post. It is an open thread comment by yourself that could have been a discussion post!
I’m not unsympathetic.
Compare and contrast my (September 7th, 2011) approach to yours (September 7th, 2011), I guess.
ADBOC, it didn’t have to be.
It sort of soon became one.
Not so, there exists altruism that is worthless or even of negative value. An all-altruistic CooperateBot is what allows DefectBots to thrive. Someone can altruistically spend all his time praying to imaginary deities for the salvation of mankind, and his prayers would still be useless. To think that altruism is about value is a map-territory confusion.
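The CooperateBot/DefectBot point can be made concrete with a minimal sketch of an iterated prisoner’s dilemma. The payoff numbers below are the standard textbook values, not anything from this thread; the bot names follow the comment’s own terminology.

```python
# Minimal sketch: against an unconditional CooperateBot, a DefectBot
# collects the temptation payoff on every round. Unconditional altruism
# is exactly what lets defectors thrive.

# Standard prisoner's dilemma payoffs for (my_move, their_move),
# with T > R > P > S: here T=5, R=3, P=1, S=0.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation (R)
    ("C", "D"): 0,  # sucker's payoff (S)
    ("D", "C"): 5,  # temptation to defect (T)
    ("D", "D"): 1,  # mutual defection (P)
}

def cooperate_bot(opponent_history):
    """Always cooperates, regardless of what the opponent has done."""
    return "C"

def defect_bot(opponent_history):
    """Always defects."""
    return "D"

def play(bot_a, bot_b, rounds=100):
    """Run an iterated game and return each bot's total payoff."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = bot_a(history_b)  # each bot sees the opponent's past moves
        move_b = bot_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

coop_score, defect_score = play(cooperate_bot, defect_bot)
print(coop_score, defect_score)  # → 0 500: the cooperator is suckered every round
```

Over 100 rounds the CooperateBot earns nothing while the DefectBot pockets the maximum possible score, which is the sense in which this kind of altruism has negative value for the altruist and subsidizes the defector.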
Your comment doesn’t just say it’s altruistic. It also tells him that if he doesn’t feel like being an altruist, that he should tell people that “placebos are for pussies”. Perhaps you were just joking when you effectively told him to insult altruists, and I didn’t get it.
Either way, if he defected in this manner, he wouldn’t just be partially sabotaging the experiment he signed up for; he’d probably also be sabotaging his future chances of being accepted into any other trial. I know that if I was a doctor, I would be less likely to accept you in a medical trial.
Um, what? I don’t understand. What deceit do you believe I committed in my above comment?
Let me see if I can summarize this thread:
Wedrifid made a strategic observation that if a person cares more about their own health than the integrity of the trial it makes sense to find out whether they are on placebo and, if they are, leave the trial and seek other solutions. He did this with somewhat characteristic colorful language.
You then voted him down for expressing values you disagree with. This is a use of downvoting that a lot of people here frown on, myself included (though I don’t downvote people for explaining their reasons for downvoting, even if those reasons are bad). Even if wedrifid thought people should screw up controlled trials for their own benefit his comment was still clever, immoral or not.
Of course, he wasn’t actually recommending the sabotage of controlled trials—though his first comment was sufficiently ambiguous that I wouldn’t fault someone for not getting it. Luckily, he clarified this point for you in his reply. Now that you know wedrifid actually likes keeping promises and maintaining the integrity of controlled trials what are you arguing about?
To me it didn’t feel like an observation, it felt like a very strong recommendation, given phrases like “Better yet”, “tell them placebos are for pussies”, “It isn’t your responsibility!”, etc.
Eh, not really. It seemed shortsighted—it doesn’t really give an alternate way of procuring this medicine, it has the possibility to slightly delay the actual medicine from going on the market (e.g. if other test subjects follow the example of seeking to learn if they’re on a placebo and also abandon the testing, thus forcing the thing to be restarted from scratch), and if a future medicine goes on trial, what doctor will accept test subjects that are known to have defected in this way?
Primarily I fail to understand what deceit he’s accusing me of when he compares my own attitude to claiming that “A is made of fire” (in context effectively meaning that I said defectors will be punished posthumously, that they’ll go to hell; that I somehow lied about the repercussions of defection).
He attacks me for committing a crime against knowledge—when of course that was what I thought he was committing, when I thought he was seeking to encourage control subjects to find out if they’re a placebo and quit the testing. Because you know—testing = search for knowledge, sabotaging testing = crime against knowledge.
Basically I can understand how I may have misunderstood him—but I don’t understand in what way he is misunderstanding me.
Upvoted comment and parent.
You’re conflating two things here: whether rationality is valuable to study, and whether rationality is easy to proselytize.
My own experience is that it’s been very valuable for me to study the material on Less Wrong: I’ve been improving my life lately in ways I’d given up on before, I’m allocating my altruistic impulses more efficiently (even the small fraction I give to VillageReach is doing more good than all of the charity I practiced before last year), and I now have a genuine understanding (from several perspectives) of why atheism isn’t the end of truth/meaning/morals. These are all incredibly valuable, IMO.
As for proselytizing ‘rationality’ in real life, I haven’t found a great way yet, so I don’t do it directly. Instead, I tell people who might find Less Wrong interesting that they might find Less Wrong interesting, and let them ponder the rationality material on their own without having to face a more-rational-than-thou competition.
This phrase jumped out in my mind as “shiny awesome suggestion!” I guess in a way it’s what I’ve been trying to do for awhile, since I found out early, when learning how to make friends, that most people and especially most girls don’t seem to like being instructed on living their life. (“Girls don’t want solutions to their problems,” my dad quotes from a book about the male versus the female brain, “they want empathy, and they’ll get pissed off if you try to give them solutions instead.”)
The main problem is that most of my social circle wouldn’t find LW interesting, at least not in its current format. Including a lot of people who I thought would benefit hugely from some parts, especially Alicorn’s posts on luminosity. (I know, for example, that my younger sister is absolutely fascinated by people, and loves it when I talk neuroscience with her. I would never tell her to go read a neuroscience textbook, and probably not a pop science book either. Book learning just isn’t her thing.)
Depending on what you mean by ‘format’, you might be able to direct those people to the specific articles you think they’d benefit from, or even pick out particular snippets to talk to them about (in a ‘hey, isn’t this a neat thing’ sense, not a ‘you should learn this’ sense).
“Pick out particular snippets” seems to work quite well. If something in the topic of conversation tags, in my mind, to something I read on LessWrong, I usually bring it up and add it to the conversation, and my friends usually find it neat. But except with a few select people (and I know exactly who they are) posting an article on their facebook wall and writing “this is really cool!” doesn’t lead to the article actually being read. Or at least they don’t tell me about reading it.
If facebook is like twitter in that regard, I mostly wouldn’t expect you to get feedback about an article having been read—but I’d also not expect an especially high probability that the intended person actually read it, either. What I meant was more along the lines of emailing/IMing them individually with the relevant link. (Obviously this doesn’t work too well if you know a whole lot of people who you think should read a particular article. I can’t advise about that situation—my social circle is too small for me to run into it.)
I, uh, just did that, and received this reply half an hour later:
I think that counts as a success.
Upvotes to you for trying something instead of defaulting to doing nothing.
It wasn’t actually on account of this discussion that I introduced my friend to LW (since I didn’t read Swimmer and Adelene’s comments till afterward); I just posted the reaction here because it was funny and relevant.
Sorry for the delayed reply...
I don’t know what Twitter is like, but the function on Facebook that I prefer to use (private messages) is almost like email and seems to be replacing email among much of my social circle. I will preferentially send my friends FB messages instead of emails, since I usually get a reply faster.
Writing on someone’s wall is public, and might result in a slower reply because it seems less urgent. But it’s still directed at a particular person, and it would be considered rude not to reply at all. But when I post an article or link, the reply I often get is “thanks, looks neat, I’ll read that later.”
At some point I was that person. Weren’t you?
A little bit, but it varies wildly based on who you are.
Not really.
I was recently around some old friends who are lacking in rationality, and kept finding myself at a complete loss. I wanted to just grab them and say exactly that.
In other news, I’ve learned that some lessons in how to politely and subtly teach rationality would be quite welcome >.>
Where’s that coming from, then?
Well, there’s been some talk about organizing a meetup group in my area, and I’m not really comfortable with that.
Are you not comfortable with that happening at all, or not comfortable with being involved in one?
What are your concerns—wasting your time, being perceived as belonging to a “weird” group, being drawn into a group process that is a net negative value to you?
I realize I’m not answering your original question. I’m still thinking about that one.
I’m not comfortable with it existing. I think it’s not useful.
I’m more than a little surprised to see you say this, given your past writings on the subject—if asked I would certainly have guessed that your reply to your own question would have been “yes, of course”.
I’m curious to know more, if you’re comfortable saying more. Not sure what to say otherwise.
People with a common interest meeting up seems natural enough. I have reservations about normativism with respect to ways of thinking, but it does seem to me that what we are learning here is worthwhile in and of itself: because it is about finding out exactly what we are, and because—just like a zebra—what we are is something rare and peculiar and fascinating.
Well, if there are other people who feel that way, they’re free to meet up to share that interest.
My serious answer: I’m not sure there’s a well-defined, cumulative, discipline-like body of knowledge in the LessWrong memeplex. I don’t know how it could be presented to an intelligent outsider who’s never heard of it. I don’t know whether it could be presented in a way that makes us look good.
My not-so-serious answer: a lot of the time I just don’t care any more.
It sounds to me like you might be in some kind of depression or low-enthusiasm state. I don’t hear a coherent critique in these comments, so much as a general sense of “boo ‘rationality’/LW”.
Contrast:
and
This feels inconsistent; as if you had been caught giving a non-true rejection.
That turned out to be the case.
Now and then I go a bit crazy and find it difficult to value anything. Luckily the worst symptom is that I don’t get much done for a while, and post gloomy comments on websites.
At least for me, being able to look at things like this (“I am in a bad mood” instead of “everything sucks”) is quite a blessing. Hope you feel better now or soon! [Edit: wording tweak.]
You might be reading SarahC as saying that teaching a competent adult to change his or her habits of thought is not possible (if you’re not, ignore this comment), but I think she’s saying that it’s not worthwhile.
If it is not worthwhile for competent adults to learn something as basic as “how to change their mind” then I would have to agree with the conclusion that we are doomed.
Er, why, exactly? Most competent adults in history have not known how to change their mind; the world has improved because of those who do. It seems to me that the key variable in teaching rationality is whether the student is willing. Most people just don’t care that much about the truth of their far-beliefs. But occasional people do, and those are the people you can teach. That’s why everyone here is a truth fetishist.
What we need is more pro-truth propaganda so that in the next generation the pool of potential rationalists is larger.
The emphasis here is on worthwhile: the idea that changing your mind, and knowing how to, has a tangible benefit, and one that is (generally, on average) worth the effort it takes to learn. If there’s no particular benefit to changing your mind, then either (a) you have already selected the best outcome or (b) your choices are irrelevant.
If this is the best possible world, then I feel okay calling us doomed; it’s a pretty lousy world.
As to irrelevancy, well, to think that I’d live the same life regardless of whether “Will you marry me?” is met with yes or no? That is not a world I want. The idea that given a set of choices, the outcome remains the same across them is just a terrifying nihilistic idea to me.
The claim isn’t that it isn’t worthwhile to learn rationalism, period. The claim is that for lots of people, it isn’t worthwhile.
The claim is that, for lots of people, the net gain from changing their mind is so minimal as to not be worth the time spent studying. This implies strongly that, for lots of people, they have either (a) already made the best choice or (b) are not faced with any meaningful choices.
(a) implies that either lots of people are completely incapable of good decisions or are the Chosen Of God, their every selection Divinely Inspired from amongst the best of all possible worlds. Which goes back to this being a pretty lousy world.
(b) flies in the face of all the major decisions people normally make (marriage, buying a house, having children, etc.), and suggests that, statistically, a lot of the “important decisions” in my own life are probably meaningless unless I am the Chosen Of Bayes, specially exempt from the nihilism that blights the mundane masses.
For some people there may be the class (c) that the cost of learning rationality is much, much higher than normal. If your focus is on this group, that’s a whole different conversation about why I think this is really rare :)
Just to begin with, the above is a terrible way to structure an inductive argument about something as variable as human behavior. Obviously few people are “completely incapable of good decisions or are the Chosen Of God” and no important decisions in life are “meaningless”. It is, however, the case that most decisions don’t matter all that much and that, when they do, people usually do a pretty good job without special training.
But the real issue that you’re missing is opportunity cost. Lots of people don’t know how to read or do arithmetic. Lots of people can’t manage personal finances. Lots of people need more training to get a better job. Lots of people suffer from addiction. Lots of people don’t have significant chunks of free time. Lots of people have children to raise. Almost everyone could benefit from learning something but many people either do not have the time or would benefit far more from learning a particular skill or trade rather than Bayesian math and how to identify cognitive biases.
I’m not disagreeing with this at all. But given the option of teaching someone nothing or teaching them this? I think it’s a net gain for them to learn how to change their mind. And I think most people have room in their life to pretty easily be casually taught a simple skill like this, or at least the basics. I’ve been teaching it as part of casual conversations with my roommate just because I enjoy talking about it.
But that isn’t the question.
I think it is a net gain for a person to learn the arguments of Christian apologetics, that doesn’t mean it is worthwhile for everyone to learn the arguments of Christian apologetics. Time is a limited resource.
I’ve taught aspects of rationality to lots of people because I like talking about it too. But my friends and family have learned it as a side effect of doing something they would be doing anyway, having interesting conversations with me. Some of them are interested in things like cognitive biases and learn on their own. But we don’t yet have anything here that makes dramatic differences in people’s lives such that it is important they spend precious resources on learning it.
ETA: That was a bit brisk of me. I think we just have different definitions of “worthwhile”. :-)
If something’s being worthwhile or not is a major consideration in whether or not we are doomed, doesn’t that make it worthwhile? OTOH, if you mean “If we are the same amount of doomed whether or not people learn to change their minds, then we are very doomed,” you are right.
I think that concisely summarizes the point I was trying to make. Thank you! :)