I’m not sure that “he believes many things that people on Less Wrong also believe” is a very good indication of rationality. Optimism regarding the future and liking technology does a better job of explaining much of that, I think.
Actually, I don’t think this is where his views and those of LW overlap so much. It’s much more in the area of his Hansonian cynicism about people and their ability to know why they do what they do, and his attitude of biting the bullet regarding unpleasant truths. Those two things, I would say, are the major philosophical overlaps between what he writes in this book, and LW’s outlook about things. (As opposed to mere agreement about specific facts or strategies.)
Hm, fair enough. Having read some more of his blog I agree that he does have essential rationalist traits such as being genuinely interested in the truth. He also seems to rely more on basic clear thinking than on overly theoretical arguments, which I am a fan of. However, he does occasionally say things like this:
The Karma Hypothesis is that releasing this idea to the world will put me in a good position with the universe when my book comes out and my start-up launches in beta any day now. I could use some good luck. And if karma isn’t a real thing, I hope the idea will make the world a better place because I’m part of that too.
It’s possible it’s just a joke, but off-hand remarks like that make me feel reluctant to take anything he says at face value. Believing or even seriously considering warm fuzzy things that don’t have evidence to back them up is a major red flag for me.
I agree that he does have essential rationalist traits such as being genuinely interested in the truth.
I was more under the impression that he’s genuinely interested in what’s useful.
It’s possible it’s just a joke
He’s a professional humorist: it’s his job to make jokes.
Believing or even seriously considering warm fuzzy things that don’t have evidence to back them up is a major red flag for me.
Depending on your definition of “evidence” you’ll either love him or hate him then. The book makes a big deal about one of his strategies for happiness, which is to deliberately spend a lot of time thinking about awesome outcomes that actually have very little probability of happening, as if they were likely to happen. The object isn’t to convince himself that these things will happen, but rather to trick the “moist robot” into feeling good about the future, so that he will have the motivation to stick to his systems.
[Edit to add: I should note that I do not endorse Adams or his views in general—I just think that this one particular book of his is extremely valuable, and more than worth working around any jokes or epistemically problematic theories. Most self-help books have far more epistemic issues than this one does, after all, and in general it’s a serious mistake to overlook good instrumental advice attached to bad theories. Bad theories are the default condition for new knowledge: most good theories evolve as alternate explanations for a reasonably predictive model attached to a bad theory.]
Making jokes is fine, and I do like his style of writing for the most part (I have read earlier books of his). The issue is whether or not I can trust his claims at face value. If I read advice of his that is based on scientific claims, I don’t want to be left wondering whether that advice is terrible because the current body of academic knowledge points in the opposite direction. Self-help advice based on common sense or experience alone is not as useful to me if I have no reason to believe his common sense on the matter is any better than mine. In that case, I have to evaluate the trustworthiness of his advice in terms of his overall rationality, and believing in nice things because they are nice to believe in (I am firmly on board with the Bayesian view of evidence, FYI) is a very bad sign on that front. It means that if he says things like “you have to believe in yourself”, I have to ask myself whether he is saying it because it sounds nice or because it is known to be an effective strategy.
So basically, what I would like to know is how you determine that his advice is good. Is it a good summary of existing thought on the matter, and is that why you recommend it? Does it simply jibe well with your own intuition? Or does it fare well on objective measures of quality, such as the accuracy of its scientific claims?
I don’t know what point you’re really trying to make here; I find it irritating when people basically say, “I’m not convinced; convince me,” because it puts social pressure on me to overstate my case. (It’s also an example of the trap mentioned in HPMOR where continually answering someone’s interrogatories leads to the impression of subordinate status.)
I don’t agree with your arguments in Adams’ case, for a number of reasons, but because of the adversarial position you’re taking, an onlooker would likely mistake my attacks on your errors for support of Adams, which isn’t really my intent.
As I said before, I support the book, not Adams’ writing, beliefs, or opinions in general. It contains many practical points that are highly in agreement with the LW zeitgeist, backed with extensive study citations, along with many non-obvious and unique suggestions that appear to make more sense than the usual sort of suggestions.
Many of those suggestions could be thought of as rooted in a “shut up and multiply” frame of mind, such as Adams’ notion that it’s worth using small amounts of “bad” or high-calorie foods (dipping carrots in ranch dressing, say, or cooking broccoli in regular butter) to tempt oneself into eating more good foods, if one would otherwise not have eaten the “good” food at all.
This is the type of idea one usually doesn’t see in diet literature, because it appears to be against a deontological morality of “good” and “bad” foods, whereas Adams is making a consequentialist argument.
Quite a lot of the book is like that, actually, in the sense that Adams presents ideas that should occur to people—but usually don’t—due to biases of these sorts. He talks a lot about how a big purpose of the book is to give people permission to do these things, or to set an example. (He mentions that the example of one of his coworkers becoming published was a huge influence on his future path, and that his example of being published inspired coworkers at his next job. “Permission”, in the sense of peer examples or explicit encouragement, is a powerful tool of influence.)
At this point, I think I’ve said all I’m willing to on this subthread. If you want to know more, read the book and look at the citations yourself. The book is physically available in hundreds of libraries, and electronically available from dozens of library systems, so you needn’t spend a penny to look at the references (or advice) for yourself.
I think your point about status is a bit silly: I am asking you these questions because I defer to your judgement and value your expertise highly, which should raise rather than lower your status. Nonetheless, I appreciate that it’s annoying to be put in the position of having to convince people to do something that’s good for them, so thank you very much for taking the time to answer my questions. I think your arguments are good, and they’ve helped me, and hopefully other people reading this, decide whether the book is worth reading.
The issue is whether or not I can trust his claims at face value. If I read advice of his that is based on scientific claims, I don’t want to be left wondering whether that advice is terrible because the current body of academic knowledge points in the opposite direction
The whole point of being a rationalist is to avoid taking things at face value and to always think critically about what you read.
For a lot of questions in that realm, the scientific data isn’t conclusive.