it mainly attracts people who already tend to agree with ideas put forth on Less Wrong
Hm, in my case I have to say that reading LessWrong changed almost all my beliefs:
Roughly 9 months ago I was a socialist, an anti-reductionist, an agnostic leaning towards deism, and a new-age-minded guy who loved psychedelic drugs and marijuana. I was proud of my existential angst and read like-minded philosophy and literature. I had no idea of transhumanism, the Singularity, existential risks or the FAI problem.
I don’t believe that I’m the only one who changed his mind after reading the sequences; I’m not that special!
That’s interesting; I didn’t expect that, since I thought that most people who could benefit a lot from LW are most likely not going to read it or understand it. But maybe I am wrong; I seldom encounter stories like yours.
But that you went all the way from new-age to the Singularity and FAI troubles me a bit. Not that it isn’t better than the new-age stuff, but can you tell me what exactly convinced you of the risks from AI?
Well, to be clear, I didn’t believe in homeopathy or astrology or other obviously false crackpot theories. In fact, some of my heroes were skeptics like Bertrand Russell and Richard Dawkins.
But I also believed in some kind of objective, transcendental morality, and I tried to combine mystical, mysterious interpretations of quantum physics with Buddhist philosophy (you know, the Atman is the Brahman, etc.), just like Schrödinger, Bohm and so on. And I (wanted to) believe in the sort of free will proposed by Kant. I didn’t really understand what I was thinking, and I had the feeling that something was wrong or inconsistent about my beliefs. When I was younger I was much more confident in materialism and atheism, but some drug experiences disturbed me and I began to question my worldview.
Anyway, let’s say I believed in enlightened, deeply wise-sounding new-age gibberish. I know, I know, it’s embarrassing, but hopefully not that embarrassing.
Well, some essays by Bostrom and, mainly, the sequences convinced me of the risks from AI. I’m not as sure about it as, e.g., Yudkowsky (in fact, I think it’s more likely than not that his scenario is false), but if we assign even a 25% probability to the Yudkowskian AI-Foom scenario, it still seems absurdly important, right? And Yudkowsky makes more sense to me than Hanson or Goertzel, and folks like Kurzweil, and especially de Garis, seem to be off base.
I am just starting out here, but I feel as if I’m about to change my mind in the same way you did. I was interested in Utopia (ending suffering), and that got me pulled into Buddhism and all the other parapsychological weirdness.