Where else on the internet are people willing to change their minds?
Many scientists are willing to change their minds. Even normal people change their minds often. People become atheists or start voting for a different party. How many members here can you actually name who changed their mind about something dramatic?
Someone who is rather cynical about Less Wrong could go a step further and conclude that Less Wrong appears to be about changing your mind, but that it mainly attracts people who already tend to agree with ideas put forth on Less Wrong, who take ideas seriously. Everyone else turns their back on it or gets filtered out quickly. And those who already agree are not going to change their minds again, because they are not entitled to the particular proof that would be necessary to change their minds: most of the controversial ideas are framed either as predictions or as logical implications that are not subject to empirical criticism. What is left over is too vague or insubstantial to change your mind about one way or the other.
Someone even more cynical might say that Less Wrong only departs from mainstream skeptical scientific consensus in ways that coincidentally line up exactly with the views of Eliezer Yudkowsky, and that it’s basically an echo chamber.
That said, rational thinking is a great ideal, and I think it’s awesome that Less Wrong even TRIES to live up to it.
When I discovered Less Wrong, there were things I disagreed with. There are actually still things discussed here where I disagree with the apparent consensus. But I’ve changed my mind on a large number of things. When I joined Less Wrong, my understanding of cryonics was that it was a scam for new-agers. I had heard of concepts like transhumanism and singularitarianism, but had no exposure to individuals who actually held such beliefs. After reading a few of the sequences, I went to EY’s website and found this. I finished that article, thought about it for approximately a minute, and said, “Yep. That makes sense.” Fast forward one week, and I’m persuading other people to sign up for cryonics. That was a pretty dramatic shift for me.
it mainly attracts people who already tend to agree with ideas put forth on Less Wrong
Hm, in my case I have to say that reading LessWrong changed almost all my beliefs:
Roughly 9 months ago I was a socialist, an anti-reductionist, and an agnostic leaning towards deism; a new-age-minded guy who loved psychedelic drugs and marijuana. I was proud of my existential angst and read like-minded philosophy and literature. I had no idea of transhumanism, the Singularity, existential risks, or the FAI problem.
I don’t believe that I’m the only one who changed his mind after reading the sequences; I’m not that special!
That’s interesting; I didn’t expect that, since I thought that most people who could benefit a lot from LW are most likely not going to read it or understand it. But maybe I am wrong; I seldom encounter stories like yours.
But that you went all the way from new-age to the Singularity and FAI troubles me a bit. Not that it isn’t better than new-age stuff, but can you tell me what exactly convinced you of the risks from AI?
Well, to be clear, I didn’t believe in homeopathy or astrology or other obviously false crackpot theories. In fact, some of my heroes were skeptics like Bertrand Russell and Richard Dawkins.
But I also believed in some objective, transcendental morality stuff, and I tried to combine mystic, mysterious interpretations of quantum physics with Buddhist philosophy (you know, the Atman is the Brahman, etc.), just like Schrödinger, Bohm, and so on. And I (wanted to) believe in the sort of free will proposed by Kant. I didn’t understand what I was thinking, and I had the feeling that something was wrong or inconsistent with my beliefs. When I was younger I was much more confident in materialism and atheism, but some drug experiences disturbed me and I began to question my worldview.
Anyway, let’s say I believed in enlightened, deeply wise-sounding new-age gibberish. I know, I know, it’s embarrassing, but hopefully not that embarrassing.
Well, some essays by Bostrom and mainly the sequences convinced me of the risks of AI. I’m not as sure about it as, e.g., Yudkowsky (in fact I think it is more likely than not that his scenario is false), but if we assign a 25% probability to the Yudkowskian AI-Foom scenario, it still seems absurdly important, right? And Yudkowsky makes more sense to me than Hanson or Goertzel, and folks like Kurzweil, and especially de Garis, seem to be off base.
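To spell out the expected-value step implicit in that argument (a minimal sketch; the 25% figure is the commenter’s, and the stake L is left abstract rather than estimated):

$$\mathbb{E}[\text{loss}] = p \cdot L = 0.25\,L$$

If L is the loss of everything we value, a factor-of-four discount still leaves the expected loss astronomically large, which is why the conclusion is not very sensitive to whether p is 0.25 or 0.9.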
I am just starting out here, but I feel as if I’m about to change my mind in the same way you did. I was interested in Utopia (ending suffering), and that got me pulled into Buddhism and all the other parapsychological weirdness.
I’m fairly enthusiastic about LW, but I think that
it mainly attracts people who already tend to agree with ideas put forth on Less Wrong, who take ideas seriously. Everyone else turns their back on it or gets filtered out quickly.
has a big effect.