Dawkins is normally a much sharper thinker than this; his arguments could have been made much more compelling. Anyway, I am going to sidestep the moral issue and look at the epistemic question.
Evolutionarily speaking, the fundamental non-obvious insight is that there’s little advantage to be had in signalling weakness and vulnerability if you don’t happen to be a social and therefore intelligent animal with a helpful tribe close by. There’s no reason to wire pain signals halfway ‘round the brain and back just to suffer in more optimal ways if there’s no one around to take advantage of thereby. We can strengthen this argument with a complementary but disjunctive mechanistic analysis: look at humans’ cingulate cortex (esp. the ACC), the insula, pain asymbolia and related insular oddities, reward signal propagation, et cetera. This would be a decent paper to read, but I’m too lazy to read it; or this one, for that matter. Do note that much brain research is exaggeration and lies, especially about the ACC, as I had the unfortunate pleasure of discovering recently.
Philosophy is perhaps better suited to this question. Metaphysically speaking it must be acknowledged that animals are obviously not as perfect as humans, and are therefore less Godlike, and therefore less sentient, as can all be proven in the same vein as Leibniz’s famous Recursive Universal Dovetailing Measure-Utility Inequality Theorem. His arguments are popularly referred to as the “No Free Haha-God-Is-Evil” theorems, though most monads are skeptical of the results’ practical applicability to monads in most monads. Theologians admit that they are puzzled by the probably impossible logical possibility of an acausal algorithm employing some variation on Thompson’s “Reality-Warping Elysium” process, but unfortunately any progress towards getting any bits about a relevant Chaitin’s omega results in its immediate diagonalization out of space, time, and all mathematically interesting axiom sets. This qua “this” can also be proven by “Goedel’s ontological proof” if you happen to be Goedel (naturally).
My default position is that suffering as we know it is fundamentally tied in with extremely important and extremely complex social decision theoretic game theoretic calculus modeling stuff, and also all that metaphysics stuff. I will non-negligibly update if someone can show me a good experiment demonstrating something like “learned helplessness” in non-hominids or non-things-that-hunted-in-packs-for-a-long-time-then-were-artificially-molded-into-hominid-companions. That high-citation rat study looked like positive bias upon brief inspection, but maybe that was positive bias.
On the meta level though, the nicest thing about going sufficiently meta is that you don’t have to worry about enlightened aqua versus turquoise policy debates—which, by the way, continue to reliably invoke the primal forces of insanity. It’s like using a tall metal rod as a totem pole for spiritual practice, in a lightning storm, while your house burns down, with the entire universe inside it, and also the love of your life, who is incredibly attractive. Maybe a cool post would be “Policy is the Mind Killer”, about how all policy discussion should be at least 16 meta levels up, because basically everything anyone ever does is a lost purpose. (It has not yet been convincingly shown that humanity is not a lost purpose, but I think this is a timeful/timeless confusion and can be dissolved in short order with right view.) Talking about how to talk about thinking about morality is a decent place to start from and work our way up or down, and in the meantime posts like multifoliaterose’s one on Lab Pascals are decent mind-teasers, maybe. But object-level policy debates just entrench bad cognitive habits. Dramatic cognitive habits. Gauche weapons from a less civilized age… of literal weapons. Your strength as a rationalist is your ability to be understood by Douglas Hofstadter and no one else. Ideally that would include yourself. And don’t forget to cut through in the same motion, of course. Anyway, this is just unsolicited advice aimed without purpose, and I acknowledge that debating lilac versus mauve can be fun sometimes. …I’m not gay; it’s just an extended metaphor extension.
Off-the-cuff hypothesis that I arrogantly deem more interesting than the discussion topic: The prefrontal cortex is exploiting executive oversight to rent-seek in the neural Darwinian economy, which results in egodystonic wireheading behaviors and self-defeating use of genetic, memetic, and behavioral selection pressure (a scarce resource), especially at higher levels of abstraction/organization where there is more room for bureaucratic shuffling and vague promises of “meta-optimization”, where the selection pressure actually goes towards the cortical substructural equivalent of hookers and blow. Analysis across all levels of organization could be given but is omitted due to space, time, and thermodynamic constraints. The prefrontal cortex is basically a caricature of big government, but it spreads propagandistic memes claiming the contrary in the name of “science”, which just happens to be largely funded by prefrontal cortices. The bicameral system is actually very cooperative despite misleading research in the form of split-brain studies attempting to promote the contrary. In reality they are the lizards. This hypothesis is a possible explanation for hyperbolic discounting, akrasia, depression, Buddhism, free will, or come to think of it basically anything that at some point involved a human brain. This hypothesis can easily be falsified by a reasonable economic analysis.
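A minimal sketch of the one concrete item in that list, since it’s the easiest to pin down: hyperbolic discounting really does produce the self-undermining preference reversals gestured at above. The functional form is Mazur’s standard hyperbola; all of the amounts, delays, and the k value below are invented purely for illustration.

```python
# Toy illustration (all numbers invented) of the preference reversal
# produced by hyperbolic discounting, V = A / (1 + k*D).

def hyperbolic_value(amount, delay, k=1.0):
    """Present value of `amount` received after `delay` days."""
    return amount / (1.0 + k * delay)

smaller_sooner = (50, 0)   # $50 right now
larger_later = (100, 3)    # $100 in three days

# Seen from 30 days out, the larger-later reward looks better...
far = [hyperbolic_value(a, d + 30) for a, d in (smaller_sooner, larger_later)]
# ...but up close the smaller-sooner reward wins: the agent predictably
# reverses its own earlier preference.
near = [hyperbolic_value(a, d) for a, d in (smaller_sooner, larger_later)]

print(f"far:  SS={far[0]:.2f}, LL={far[1]:.2f}")    # far:  SS=1.61, LL=2.94
print(f"near: SS={near[0]:.2f}, LL={near[1]:.2f}")  # near: SS=50.00, LL=25.00
```

An exponential discounter’s preferences never flip like this, which is why hyperbolic discounting is the go-to formal example of an agent working against its own future self.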
Does anybody else understand this? “enlightened aqua versus turquoise policy debates”—is that a thing?
Folk ’round these parts are smart enough not to get dragged down in Blue versus Green political debate, most of the time. But there is much speculation about policy that is even more insane than Blue versus Green, even if it does happen to be more sophisticated and subtle. (Mauve versus lilac is more sophisticated than Blue versus Green.) For example, it is a strong attractor in insanityspace for people to hear about la singularidad and say “Well obviously this means we should kill everyone in the world except the FAI team, because that’s what utilitarianism says; I can’t believe you people are advocating such extreme measures, that is sick and I have contempt for you; and if you’re not advocating such extreme measures then you must be inconsistent and not actually believe anything you say! DRAMA DRAMA DRAMA!” Or some of the responses to multifoliaterose’s infinite lab universes post. I’m under the impression that the Buddhists talk about this kind of obsession with drama in the context of Manjushri’s sword. Anyway, policy debate makes people stupid; but instead of going up a few meta levels and dealing with that stupidity directly, people choose to make the context of their stupidity a dramatic and emotionally charged one. I have no aim in complaining about this besides maybe highlighting the behavior so that people can notice if they’re starting to slip into it.
It’s funny how people always complain about death, but not about inferential distance. Inferential distance is a much blacker plague upon the world than death, and the technology to eliminate it is probably about as difficult to engineer as strong anti-aging tech. Technologies that improve communication are game-breaking technologies: e.g. language, writing, the printing press, the internet, and the mind-blowing stuff you learn about once you’re of high enough rank in the Bayesian Conspiracy.
You’re clearly a smart guy and have interesting things to say, but your posts give off a strong crank vibe. I’ve noticed this in your comments before, so I don’t think it’s an isolated issue. Perhaps this doesn’t show up in your social interactions elsewhere, in which case it’s not a serious issue for you; but if it does, I think it would be well worth your while to pay attention to it.
Here are some speculations about what it is that sets off my crank alert:
You use a lot of references/vocabulary that will be opaque to lots of people
You jump between points rapidly
Thank you for your helpful response. It would take a long time to explain the psychology involved on my part, but I do indeed have a fairly thorough understanding of the social psychology involved on the part of others. Sometimes I legitimately expect persons to understand what I am saying and am surprised when they do not, but most often I do not anticipate that folk will understand what I am saying and am unsurprised when they do not. I often comment anyway for three reasons. First, because it would be prohibitively motivationally expensive for me to fully explain each point, and yet I figure there’s some non-negligible chance that someone will find something I say to be interesting despite the lack of clarity. Second, because I can use the little bit of motivation I get from the thought of someone potentially getting some insight from something I say as inspiration to write some of my thoughts down, which I usually find very psychologically taxing. Third, because of some sort of unvirtuous passive-aggression or frustration caused by people being uncharitable in interpreting me, and thus a desire to defect in communication as repayment. This last comes from a sort of contempt, ’cuz I’ve been working on my rationalist skillz for a while now as a sort of full-time endeavor and I can see many ways in which Less Wrong is deficient. I am completely aware that such contempt—like all contempt—is useless and possibly inaccurate in many ways. I might start cutting back on my Less Wrong commenting soon; I have an alternative account where I make only clear and high-quality comments, so I might as well just use that one. Again, thanks for taking the time to give feedback.
I’d be very interested in seeing posts on the specifics of how LW is deficient/could improve.
Third, because of some sort of unvirtuous passive-aggression or frustration caused by people being uncharitable in interpreting me, and thus a desire to defect in communication as repayment.
You know this causes them to defect in turn by actively not-trying to understand you, right?
Of course. “I do indeed have a fairly thorough understanding of the social psychology involved on the part of others.”
Interesting. Good context to have.
I would expect such contempt to be actively harmful to you (in that people will like you and listen to you less).
I hope I did not come off as adversarial.
I will non-negligibly update if someone can show me a good experiment demonstrating something like “learned helplessness” in non-hominids or non-things-that-hunted-in-packs-for-a-long-time-then-were-artificially-molded-into-hominid-companions.
Elephants?
Thanks for the link; I’ll check it out soon. It’s funny: just a few days before I read your comment, I noticed that I was confused by elephants. What is up with elephants? They’re weird. Anyway, thanks.
I just returned to the parent comment by way of comment-stalking muflax, and got even more out of it this time. You live in an interesting place, Will; and I do enjoy visiting.
Still not sure where the “dovetailing” of Leibniz comes in; or what the indefinite untrustworthy basement layers of Ken Thompson have to do with Elysium; but perhaps I’ll get it on my next reading.
Nerfhammer’s excellent wikipedia contributions reminded me of your disdain for the heuristics and biases literature. The disdain seems justified (for example, the rhyme-as-reason effect depends on Bayesian evidence: a guideline immortalized in verse has likely been considered longer than the average prose observation); but, are there any alternatives for working toward more effective thinking?
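To make that parenthetical concrete, here is a toy odds-form Bayes update. The likelihoods are invented purely for illustration; the point is only that rhyme can carry nonzero evidence if versified guidelines tend to have survived longer selection than prose ones.

```python
# Toy odds-form Bayes update (likelihoods invented for illustration):
# if sound guidelines are more likely than unsound ones to survive long
# enough to be immortalized in verse, then rhyme carries real evidence.

prior_sound = 0.5            # prior that an arbitrary guideline is sound
p_rhyme_if_sound = 0.10      # assumed survival-and-versification rates
p_rhyme_if_unsound = 0.05

prior_odds = prior_sound / (1 - prior_sound)               # 1.0
likelihood_ratio = p_rhyme_if_sound / p_rhyme_if_unsound   # 2.0
posterior_odds = prior_odds * likelihood_ratio             # 2.0
p_sound_given_rhyme = posterior_odds / (1 + posterior_odds)
print(f"P(sound | rhymes) = {p_sound_given_rhyme:.2f}")    # 0.67, up from 0.50
```

So calling rhyme-as-reason a pure “bias” presupposes that the likelihood ratio is exactly 1, which is an empirical claim the literature doesn’t establish.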
You live in an interesting place, Will; and I do enjoy visiting.
It’s gotten about twice as interesting since I wrote that comment. E.g. I’ve learned a potentially very powerful magick spell in the meantime.
“Reality-Warping Elysium” was a Terence McKenna reference; I don’t remember its rationale but I don’t think it was a very good one.
Nerfhammer’s excellent wikipedia contributions reminded me of your disdain for the heuristics and biases literature.
I think I may overstate my case sometimes; I’m a very big Gigerenzer fan, and he’s one of the most cited H&B researchers. (Unlike most psychologists, Gigerenzer is a very competent statistician.) But unfortunately the researchers who are most cited by LessWrong types, e.g. Kahneman, are those whose research is of quite dubious utility. What’s frustrating is that Eliezer knows of and appreciates Gigerenzer, and must know of Gigerenzer’s critiques of Kahneman and of Kahneman’s (overzealous semi-Bayesian) style of research, but he almost never cites that side of the H&B literature. Kaj Sotala, a cognitive science student, has pointed out some of these things to LessWrong, and yet the arguments don’t seem to have entered the LessWrong memeplex.
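For anyone who hasn’t read Gigerenzer: his best-known methodological point is that base-rate problems people flub in probability format become nearly transparent when restated as natural frequencies. A minimal sketch with the standard textbook-style numbers (1% base rate, 80% sensitivity, 9.6% false-positive rate), not anything from a specific paper:

```python
# Gigerenzer's natural-frequencies framing of the classic base-rate
# problem: restate the probabilities as counts out of 1000 people.

have_condition = 10                    # 1% of 1000 people
true_positives = 8                     # 80% of those 10 test positive
false_positives = round(0.096 * 990)   # ~95 of the 990 without it

p = true_positives / (true_positives + false_positives)
print(f"P(condition | positive test) = {p:.2f}")  # ~0.08, not ~0.80
```

Phrased as counts, the answer is almost readable off the numbers; phrased as conditional probabilities, the same question reliably elicits answers near 80%.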
The two hallmarks of LessWrong are H&B and Bayesian probability: the latter is often abused, especially in the form of algorithmic probability, and decision theorists have shown that it’s not as fundamental as Eliezer thought it was; and the H&B literature, like all psychology literature, is filled with premature conclusions, misinterpretations, questionable and contradictory results, and generally an overall lack of much that can be used to bolster rationality. (It’s interesting and frustrating to see many papers demonstrating “biases” in opposite directions on roughly the same kind of problem, with only vague and ad hoc attempts to reconcile them.) If there’s a third hallmark of LessWrong then it’s microeconomics and game theory, especially Schelling’s style of game theory, but unfortunately it gets relatively neglected and the posts applying Schellingian and Bayesian reasoning to complex problems of social signaling hermeneutics are very few and far between.
I may have adjusted too much, but… Before I read a 1980s(?) edition of Dawes’ “Rational Choice in an Uncertain World” I had basically the standard LessWrong opinion of H&B, namely that it’s flawed like all other science but you could basically take its bigger results for granted as true and meaningful; but as I read Dawes’ book I felt betrayed: the research was clearly so flawed, brittle, and easily misinterpreted that there’s no way building an edifice of “rationality” on top of it could be justifiable. A lot of interesting research has surely gone on since that book was written, but even so, that the foundations of the field are so shoddy indicates that the field in general might be non-negligibly cargo cult science. (Dawes even takes a totally uncalled-for and totally incorrect potshot at Christians in the middle of the book; this seems relatively innocuous, but remember that Eliezer’s naive readers are doing the same thing when they try to apply H&B results to the reasoning of normal/superstitious/religious folk. It’s the same failure mode: you have these seemingly solid results, so now you can clearly demonstrate how your enemies’ reasoning is wrong and contemptible, right? It’s disturbing that this attitude is held even by some of the most-respected researchers in the field.)
I remain stressed and worried about Eliezer, Anna, and Julia’s new organization for similar reasons; I’ve seen people (e.g. myself) become much better thinkers due to hanging out with skilled thinkers like Anna, Steve Rayhawk, Peter de Blanc, Michael Vassar, et cetera; but this improvement had nothing to do with “debiasing” as such, and had everything to do with spending a lot of time in interesting conversations. I have little idea why Eliezer et al. think they can give people anything more than social connections and typical self-help improvements that could be gotten from anywhere else, unless they plan on spending a lot of time actually talking to people about actual unsolved problems and demonstrating how rationality works in practice.
but, are there any alternatives for working toward more effective thinking?
Finding a mentor or at least some peers and talking to them a lot seems to work somewhat; having high intelligence seems pretty important; not being neurotypical seems as important as high intelligence; reading a ton seems very important, but I’m not sure if it’s as useful for people who don’t start out schizotypal. Making oneself more schizotypal seems like a clear win, but I don’t know how one would go about doing it; maybe doing a lot of nitrous or ketamine, but um, don’t take my word for it. There’s a fundamental skill of taking some things very seriously and other things not seriously at all that I don’t know how to describe or work on directly. Yeah, I dunno; but one big thing that separates the men from the boys, and that is clearly doable, is just reading a ton of stuff and seeing how it’s connected, and building lots of models of the world based on what you read until you’re skilled at coming up with off-the-cuff hypotheses. That’s what I spend most of my time doing. I’m certain that getting good at chess helps your rationality skills, and I think Michael Vassar agrees with me; I definitely notice that some of my chess-playing subskills for thinking about moves and counter-moves get used more generally when thinking about arguments and counter-arguments. (I’m rated like 1800 or something.)
Well shoot, don’t tell us about it—our disbelief might stop it from working.
If I don’t tell you what it is or what it does, then I think I’m okay. Admittedly I don’t have much experience in the field.
If there’s a third hallmark of LessWrong then it’s microeconomics and game theory, especially Schelling’s style of game theory, but unfortunately it gets relatively neglected and the posts applying Schellingian and Bayesian reasoning to complex problems of social signaling hermeneutics are very few and far between.
I blame the fact that Eliezer doesn’t have a sequence talking about them.
Less Wrong has been created with the goal in mind of getting people to support SIAI.
Less Wrong is mainly a set of beliefs and arguments selected for their persuasiveness in convincing people that creating friendly AI is of utmost importance (follow the link above if you think I am wrong).
The two hallmarks of LessWrong are H&B and Bayesian probability: the latter is often abused...
I believe that the most abused field is artificial intelligence. The ratio of evidence to claims about artificial intelligence is extremely low.
Your last graf is tantalizing but incomprehensible. Expound, please.
If you possessed a talent for writing decent prose, you could be the next Lovecraft. Mind, Lovecraft’s prose was less than decent, but that is beside the point.
My default position is that suffering as we know it is fundamentally tied in with extremely important and extremely complex social decision theoretic game theoretic calculus modeling stuff, and also all that metaphysics stuff. I will non-negligibly update if someone can show me a good experiment demonstrating something like “learned helplessness” in non-hominids or non-things-that-hunted-in-packs-for-a-long-time-then-were-artificially-molded-into-hominid-companions. That high-citation rat study looked like positive bias upon brief inspection, but maybe that was positive bias.
Aside from this paragraph, I am almost entirely unsure what you were stating in that post. However, it produced feelings of interest and dread.
By chance, do you have any capacity to summarize it? If so, would you be willing to do so?
Do note that much brain research is exaggeration and lies, especially about the ACC, as I had the unfortunate pleasure of discovering recently.
Mind expanding on that?
Also, “Recursive Universal Dovetailing Measure-Utility Inequality Theorem” is an extremely awesome name. Phrased like this, I actually finally got why you’re raving so much about Leibniz. Gotta try re-reading him from that perspective. Your comments really should come with a challenge rating and a “prerequisites: Kolmogorov level 5, feat: Bicameral Mind” list.