I’m a little puzzled as to why the question contains the phrase “how you came to identify as rationalist.” My introduction to what I think this site means by rationalism (not the “rationalism” of Descartes, Leibniz and Spinoza, I HOPE) was through Robert Anton Wilson’s books Quantum Psychology and Prometheus Rising (although this just watered some seeds planted earlier). R.A.W. led me to Korzybski and his famous “is of identity” polemics. So why does a site which attributes much of its influence to Korzybski ask me a question that (apparently) flies in the face of Korzybskian rationality? I’m not trying to be contentious. I’m sure there’s a perfectly REASONABLE explanation, which I’ll patiently await.
Korzybski and E-Prime are not well-known on LW. LW’s ideal of rationalism is an AI which reasons perfectly using the available evidence and whose actions really are optimal for its particular goals. The Sequences are full of introspective tips and behavioral tests for telling whether you’re on the right track, so in that sense the philosophy has been given a human form, but the rational ideal which LW humans seek to approximate is described mathematically and computationally, in formulae due to Bayes, Solomonoff, and others. It’s a different culture and a different sensibility from what you find in RAW.
By the way, Descartes, Leibniz, and Spinoza aren’t so bad, especially if you remember that they created the ideas that we now associate with them. Who wouldn’t want to be a creator and a discoverer on that level? Their shortcomings are your opportunity.
Well, I mean… “the map is not the territory” is Korzybski. Eliezer just sucked at citing clearly.
It seemed like everywhere I went on this site yesterday talked about maps and territories. I don’t recall exactly where, but I thought it was rightly attributed to Alfred Korzybski (AK). The map and territory heuristic is, AFAIK, AK’s coinage, and I just assumed all the map and territory references alluded to a strong Korzybskian foundation.
E-Prime was the invention of someone else (I forget his name; easily Googleable or Wikipediable) who closely followed AK. I find it impractical for language, but more helpful for reasoning.
I know about Korzybski and his general semantics, but very little about the actual substance of the stuff beyond E-Prime, which seems cutesy but too flimsy for any serious lifting. My brain keeps wanting to slot him into that weird corner of mid-20th-century American ideaspace that spawned Ayn Rand and L. Ron Hubbard. He has the pulp science fiction connection, the cranky outsider contrarian fans who treat the system as the be-all and end-all of philosophy, and yet his stuff seems mostly ignored by contemporary academia. None of this is an actual indictment, but with no evidence to the contrary, it does keep me from being very interested.
the cranky outsider contrarian fans who treat the system as the be-all and end-all of philosophy, and yet his stuff seems mostly ignored by contemporary academia.
I haven’t heard that be-all-and-end-all-of-philosophy bit (it could come from his strong following of Wittgenstein), but I do know he is considered a principal predecessor of self-help psychology, which might explain the bias against him in academia... I would not stereotype him with the likes of Rand or Hubbard (yikes!)
The only academic I can recall talking to about him was my Learning and History & Systems of Psychology professor, who knew who he was (he had dual Ph.D.s, in psychology and philosophy) but expressed bafflement as to why I liked him... However, this is the same guy who said things like “you don’t need to read Wittgenstein to know language is a game” and “Philosophy’s a bunch of bullshit and Kant’s the biggest bullshitter of them all,” and who, when I lent him my copy of RAW’s Quantum Psychology, held it up to the whole class the next day and lectured on why you shouldn’t read books like that. He also was a cranky (outsider-ish) contrarian... but maybe he was right... maybe you don’t need to read RAW or AK to know the map’s not the territory.
I would not stereotype him with the likes of Rand or Hubbard (yikes!)
Yeah, there’s a difference between deciding that his stuff actually is the same kind of thing as some very iffy material and skipping it on those grounds, versus just noting a vague and very possibly unfair surface resemblance to iffy material and not bothering to investigate further, since the stuff is 70 years old and there should be more people saying it’s important if it really is.
when I lent him my copy of RAW’s Quantum Psychology, [he] held it up to the whole class the next day and lectured on why you shouldn’t read books like that.
What should I know about this one? I know that when a book has “quantum” in the title and is not a physics book, the odds are that it really is a book you shouldn’t be reading. If my quick-and-unfair pattern match for Korzybski was Hubbard+Rand, my quick-and-unfair pattern match for something titled “Quantum Psychology” is The Secret.
Then again, I do know that RAW should be more interesting than that, though I also have the suspicion that his stuff may be a bit too stuck in the counterculture of the 60s and 70s to really have aged well.
It’s a different culture and a different sensibility from what you find in RAW.
I don’t know enough about LW’s culture to say yet, but for a site—and correct me if I’m wrong—whose “mission” includes taking the “curse” out of “singularity,” Robert Anton Wilson’s technological optimism strikes me as great support for such a pursuit... no?
Yes, but:
RAW was chronically skeptical of everything; LW believes very strongly in the “reality-tunnel” of natural science.
RAW was very interested in parapsychology and the “eight-circuit model”; to LW, that’s all pseudoscience and crackpottery.
RAW had an interest in mystical states of consciousness and nondualist ontology; LW, in mind-as-computation and atheist naturalism.
Eliezer’s general ideas are the sort of thing that Wilson would have partly assimilated into his personal mix (he would have loved the site’s name), and partly rejected as “fundamentalist materialism”. Also, LW has a specific futurist eschatology, in which the fate of the world is decided by the value system of the first AI to bootstrap its way beyond human intelligence. There are people here who seriously aspire to determine safe initial conditions for such an event, and related concepts such as “paperclip maximizer” and “timeless decision theory” (look them up in the LW wiki) are just as pervasive here as the distinctive concepts of LW discourse about general rationality.
RAW was very interested in parapsychology and the “eight-circuit model”; to LW, that’s all pseudoscience and crackpottery.
How do you and/or LWers distinguish among science, pseudoscience and crackpottery?
RAW had an interest in mystical states of consciousness and nondualist ontology; LW, in mind-as-computation and atheist naturalism.
How do you and/or LWers distinguish mystical mental states from mind-as-computation mental states? (That looks like cognitive reductionism from my perspective.)
Have you read his Nature’s God? Couldn’t one make a case for a naturalistic atheism from that and his similar works?
How do you and/or LWers distinguish among science, pseudoscience and crackpottery?
Such a question demands a serious and principled answer, which I won’t give. But it’s a cultural fact about this place that parapsychology (and all other standard skeptics’ whipping-boys) will be regarded as pseudoscience, and something like the eight-circuit model as too incoherent to even count as pseudoscience. There are thousands of people here, so there are all sorts of ideological minorities lurking in the woodwork, but the preferred view of the universe is scientifically orthodox, laced with a computer scientist’s version of platonism, and rounded out with a Ray Kurzweil concept of the future.
How do you and/or LWers distinguish mystical mental states from mind-as-computation mental states? (That looks like cognitive reductionism from my perspective.)
Mysticism isn’t a topic that LW has paid any attention to. I think it would mostly be filed under “religious mental disorder”, except that, because of the inevitable forays into reality-as-computer-program and all-is-mathematics, people keep reinventing propositions and attitudes which sound “mystical”. This is a place where people try to understand their subjectivity in terms of computation, and it’s natural that they would also do this for mystical subjectivity, and they might even regard an evocative computational metaphor as a plausible theory for the cognitive neuroscience of mysticism. For example… maybe mystical states are what happens when your global cognitive workspace is populated with nothing but null pointers! You could turn that into a physical proposition about cortical columns and neural activation patterns. That’s the sort of “theory of mysticism” I would expect a LWer to invent if they took up the topic.
These are topics in which I deviate somewhat from the LW norm. My trademark spiel is all about qualia-structures in quantum biology, not universe as Turing machine. Also, LW isn’t all scientific reductionism, there are many other things happening here at the same time. In framing RAW vs LW as tolerance for mystical nondualism versus preference for atheist naturalism, I’m just singling out the biggest difference in sensibility.
Meditation has received some attention here, though I can’t think of other sorts of mysticism. Perhaps Crowley’s writings.
Would you please refer me to the discussions on meditation you’re thinking of?
This is a sticky subject. “Meditation” and “mysticism” differ from context to context. E.g., Christian mysticism (the telos of which is union with God) and what Crowley meant by mysticism are fundamentally different (the latter sharing more in common with Hindu yogic praxis, where union or samādhi is not necessarily restricted to a Deity; and in Buddhist meditation the purpose of samādhi is subsumed under a different goal altogether). Meditation can refer to so many different things that the term is basically useless unless one gets very specific. But I’m not sure if that serves LW’s purposes, so I’ll hold off saying anything else for now.
Here are the two main posts tagged with Meditation, and here are the three discussion posts. Also see DavidM’s posts (1 and 2; 3 appears never to have been written) and a more recent thread about it. I’ve probably missed a few posts, which you can find by searching for meditation.
The impression I get is that there are several people who find it interesting/useful, but it hasn’t penetrated deeply enough to become part of the LW core. (I personally don’t meditate, after a few initial tests suggested that noticeable effects would take far more time input than I was willing to give it.)
Fair enough. I like your sense of humour, and you (and pretty much everyone I’ve interacted with here) are very polite and civil, which I appreciate a bunch. I’ve spent substantial time on some internet forums, and shit can get pretty heated in a hurry. I’m sure people go to battle here occasionally, but I haven’t encountered anything too volatile (yet?).
Anyway, just my way of saying thanks.
Besides, I’m not here to make sure LW fits into my perceptions about RAW et al. I’m here to learn more about rationality.
This mischaracterizes him. He was too optimistic about humanity, technology and the future for this to be true. Furthermore, he preferred zeteticism over skepticism.
...nondualist ontology...
Please detail what you mean by this... I think I know, but want to be sure before I proceed.
“AI” as in artificial intelligence? Please link me to the explanation of that on this site. Thanks (if I don’t find it myself first). I’m still reluctant to use phrasing like “LW humans,” as that type of definitionalism sends up “groupthink” red flags. I’m not saying it’s bull, but I need some persuading and time to snoop around (this site is HUGE).
I didn’t mean to say I’m entirely dismissive of rationalism, just that I want to be clear on what it means at LW. Epistemologically, I’ve generally been an empiricist, but I have changed my mind on that, as some of my experiences with Buddhist practice have made me open to the possibility that at least some of our knowledge comes from something other than “sense experience.”
They mean ‘rationalist’ in the sense of following a rational approach, which we loosely associate with Bayesian thought. As for AI, this seems like the most relevant connection and also mentions a limitation of pure Bayesian reasoning. Then there’s the middle icon at the top right of the page.
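Since “Bayesian thought” keeps coming up in this thread without being unpacked, here is a minimal sketch of the single Bayes-rule update it refers to. The diagnostic-test framing and all the numbers are purely hypothetical illustrations, not anything from the comments above:

```python
# A single Bayesian update, P(H|E) = P(E|H) * P(H) / P(E),
# illustrated with a hypothetical diagnostic test.

def posterior(prior, likelihood, false_positive_rate):
    """P(hypothesis | positive evidence) via Bayes' theorem.

    prior               -- P(H), base rate of the hypothesis
    likelihood          -- P(E|H), probability of evidence if H is true
    false_positive_rate -- P(E|not H), probability of evidence if H is false
    """
    # Total probability of seeing the evidence at all:
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

# A test that is 90% sensitive with a 5% false-positive rate,
# applied to a condition with a 1% base rate:
print(round(posterior(0.01, 0.9, 0.05), 3))  # → 0.154
```

The point the formalism makes: even strong evidence (a 90%-sensitive test) only lifts a 1% prior to about 15%, which is why base rates matter so much in LW-style reasoning.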
Since I didn’t write this post, I can’t answer your main question, but I can shed some light on:
what I think this site means by rationalism (not the “rationalism” of Descartes, Leibniz and Spinoza I HOPE)
We’re entirely about rationality, not rationalism. I’ve mentioned that this can be confusing, unfortunately we couldn’t think of a better alternative. This should clear up what we mean by rationality.
Eliezer got some early influence from the General Semantics-inspired Null-A books by A.E. van Vogt.
(I’m leaving two versions of this comment in different threads because buybuydandavis also asked about Korzybski and LW.)
Edit: I realized that this comment doesn’t make much sense as a direct reply to yours. Consider it an addendum to Mitchell_Porter’s comment.