Which paper or post outlines those core claims? I am not sure what they are.
Mostly just:
AGI might be created within my lifetime
When AGI is created, it will eventually take control of humanity’s future
It will be very hard to create AGI in such a way that it won’t destroy almost everything that we hold valuable
I find it very hard to pinpoint when and how I changed my mind about what. I’d be interested to hear some examples so I can compare them with my own opinions on those issues, thanks.
Off the top of my head:
I stopped being religious (since then I’ve alternated between various degrees of “religion is idiotic” and “religion is actually kinda reasonable”)
I think it was around this time that I did a pretty quick heel-turn from being a strong supporter of the current copyright system to wanting to see the whole system drastically reformed (I’ve been refining and changing my exact opinions on the subject since then)
I used to be very strongly socialist (in the Scandinavian sense) and thought libertarians were pretty much crazy; since then I’ve come to see that they do have a lot of good points
I used to be very frustrated by people behaving in seemingly stupid and irrational ways; these days I’m a lot less frustrated, since I’ve come to see the method in the madness. (E.g. realizing the reason for some of the (self-)signaling behavior outlined in this post makes me a lot more understanding of people engaging in it.)
What do you mean by that? What does it mean for you to believe in the issue? What facts?
Things like:
Thinking that “oh, this value problem can’t really be that hard, I’m sure it’ll be solved” and then realizing that no, the value problem really is quite hard.
Thinking that “well, maybe there’s no hard takeoff, Moore’s law levels off and society will gradually adapt to control the AGIs” and then realizing that even if there were no hard takeoff at first, it would only be a matter of time before the AGIs broke free of human control. Things might be fine and under control for thirty years, say, and just when everyone is getting complacent, some computing breakthrough suddenly lets the AGIs run ten times as fast and then humans are out of the loop.
Thinking that “well, even if AGIs are going to break free of human control, at least we can play various AGIs against each other” and then realizing that this will only get humans caught in the crossfire; various human factions fighting each other hasn’t allowed the chimpanzees to play us against each other very well.