I just returned to the parent comment by way of comment-stalking muflax, and got even more out of it this time. You live in an interesting place, Will; and I do enjoy visiting.
Still not sure where the “dovetailing” of Leibniz comes in; or what the indefinite untrustworthy basement layers of Ken Thompson have to do with Elysium; but perhaps I’ll get it on my next reading.
Nerfhammer’s excellent wikipedia contributions reminded me of your disdain for the heuristics and biases literature. The disdain seems justified (for example, the rhyme-as-reason effect depends on Bayesian evidence: a guideline immortalized in verse has likely been considered longer than the average prose observation); but, are there any alternatives for working toward more effective thinking?
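The Bayesian-evidence point in the parenthetical can be made concrete with a toy calculation. This is only a sketch: the function and all of its numbers are invented for illustration, standing in for "reliable guidelines are more likely to have survived long enough to get set in verse."

```python
# Toy Bayesian update for the rhyme-as-reason point above.
# Every number here is an invented assumption, not data.

def posterior(prior, p_rhyme_given_good, p_rhyme_given_bad):
    """P(guideline is reliable | it is in verse), via Bayes' rule."""
    p_rhyme = prior * p_rhyme_given_good + (1 - prior) * p_rhyme_given_bad
    return prior * p_rhyme_given_good / p_rhyme

# Suppose 30% of folk guidelines are reliable, and reliable ones
# (having been considered longer) are three times as likely to have
# been immortalized in verse.
print(posterior(0.3, 0.15, 0.05))  # ≈ 0.56
```

Under these made-up odds, rhyming really is (weak) evidence of quality, so treating rhyme-as-reason as a pure "bias" overstates the case.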
You live in an interesting place, Will; and I do enjoy visiting.
It’s gotten about twice as interesting since I wrote that comment. E.g. I’ve learned a potentially very powerful magick spell in the meantime.
“Reality-Warping Elysium” was a Terence McKenna reference; I don’t remember its rationale but I don’t think it was a very good one.
Nerfhammer’s excellent wikipedia contributions reminded me of your disdain for the heuristics and biases literature.
I think I may overstate my case sometimes; I’m a very big Gigerenzer fan, and he’s one of the most cited H&B researchers. (Unlike most psychologists, Gigerenzer is a very competent statistician.) But unfortunately the researchers who are most cited by LessWrong types, e.g. Kahneman, are those whose research is of quite dubious utility. What’s frustrating is that Eliezer knows of and appreciates Gigerenzer and must know of his critiques of Kahneman and his (overzealous semi-Bayesian) style of research, but he almost never cites that side of the H&B research. Kaj Sotala, a cognitive science student, has pointed out some of these things to LessWrong and yet the arguments don’t seem to have entered into the LessWrong memeplex.
The two hallmarks of LessWrong are H&B and Bayesian probability: the latter is often abused, especially in the form of algorithmic probability, and decision theorists have shown that it’s not as fundamental as Eliezer thought it was; and the H&B literature, like all psychology literature, is filled with premature conclusions, misinterpretations, questionable and contradictory results, and an overall lack of much that can be used to bolster rationality. (It’s interesting and frustrating to see many papers demonstrating “biases” in opposite directions on roughly the same kind of problem, with only vague and ad hoc attempts to reconcile them.) If there’s a third hallmark of LessWrong then it’s microeconomics and game theory, especially Schelling’s style of game theory, but unfortunately it gets relatively neglected and the posts applying Schellingian and Bayesian reasoning to complex problems of social-signaling hermeneutics are few and far between.
I may have adjusted too much, but… Before I read a 1980s(?) edition of Dawes’ “Rational Choice in an Uncertain World” I had basically the standard LessWrong opinion of H&B, namely that it’s flawed like all other science but you could basically take its bigger results for granted as true and meaningful; but as I read Dawes’ book I felt betrayed: the research was clearly so flawed, brittle, and easily misinterpreted that there’s no way building an edifice of “rationality” on top of it could be justifiable. A lot of interesting research has surely gone on since that book was written, but even so, foundations that shoddy suggest that the field in general might be, to a non-negligible degree, cargo cult science. (Dawes even takes a totally uncalled-for and totally incorrect potshot at Christians in the middle of the book; this seems relatively innocuous, but remember that Eliezer’s naive readers are doing the same thing when they try to apply H&B results to the reasoning of normal/superstitious/religious folk. It’s the same failure mode: you have these seemingly solid results, so now you can clearly demonstrate how your enemies’ reasoning is wrong and contemptible, right? It’s disturbing that this attitude is held even by some of the most-respected researchers in the field.)
I remain stressed and worried about Eliezer, Anna, and Julia’s new organization for similar reasons; I’ve seen people (e.g. myself) become much better thinkers due to hanging out with skilled thinkers like Anna, Steve Rayhawk, Peter de Blanc, Michael Vassar, et cetera; but this improvement had nothing to do with “debiasing” as such, and had everything to do with spending a lot of time in interesting conversations. I have little idea why Eliezer et al. think they can give people anything more than social connections and typical self-help improvements that could be gotten from anywhere else, unless Eliezer et al. plan on spending a lot of time actually talking to people about actual unsolved problems and demonstrating how rationality works in practice.
but, are there any alternatives for working toward more effective thinking?
Finding a mentor or at least some peers and talking to them a lot seems to work somewhat; having high intelligence seems pretty important; not being neurotypical seems as important as high intelligence; reading a ton seems very important, but I’m not sure if it’s as useful for people who don’t start out schizotypal. Making oneself more schizotypal seems like a clear win, but I don’t know how one would go about doing it; maybe doing a lot of nitrous or ketamine, but um, don’t take my word for it. There’s a fundamental skill of taking some things very seriously and other things not seriously at all that I don’t know how to describe or work on directly. Yeah, I dunno; but one big thing that separates the men from the boys, and that is clearly doable, is just reading a ton of stuff and seeing how it’s connected, and building lots of models of the world based on what you read until you’re skilled at coming up with off-the-cuff hypotheses. That’s what I spend most of my time doing. I’m certain that getting good at chess helps your rationality skills, and I think Michael Vassar agrees with me; I definitely notice that some of my chess-playing subskills for thinking about moves and counter-moves get used more generally when thinking about arguments and counter-arguments. (I’m rated like 1800 or something.)
If there’s a third hallmark of LessWrong then it’s microeconomics and game theory, especially Schelling’s style of game theory, but unfortunately it gets relatively neglected and the posts applying Schellingian and Bayesian reasoning to complex problems of social-signaling hermeneutics are few and far between.
I blame the fact that Eliezer doesn’t have a sequence talking about them.
Less Wrong has been created with the goal in mind of getting people to support SIAI. Less Wrong is mainly a set of beliefs and arguments selected for their persuasiveness in convincing people that creating friendly AI is of utmost importance (follow the link above if you think I am wrong).
The two hallmarks of LessWrong are H&B and Bayesian probability: the latter is often abused...
I believe that the most abused field is artificial intelligence. The ratio of evidence to claims about artificial intelligence is extremely low.
It’s gotten about twice as interesting since I wrote that comment. E.g. I’ve learned a potentially very powerful magick spell in the meantime.
Well shoot, don’t tell us about it—our disbelief might stop it from working.
If I don’t tell you what it is or what it does then I think I’m okay. Admittedly I don’t have much experience in the field.