I’m curious to know, for anyone who has read a lot of Yudkowsky’s and Scott Alexander’s writings (I even read them for entertainment), how they are feeling about the advancements in AI—all happening so fast and at such magnitude.
Yudkowsky’s views can now be found mostly on Twitter. He is very pessimistic, for reasons described in detail in his List of Lethalities and better summarized by Zvi. I’m curious about Alexander’s current views—I don’t keep up on Astral Codex Ten.
To me it seems that Yudkowsky’s reasons for pessimism are all good ones, but they do not stack up to anywhere near the 99%+ p(doom) he’s espoused. I’ve attempted to capture why in essentially all of my posts, and in brief form in Cruxes of disagreement on alignment difficulty and The (partial) fallacy of dumb superintelligence, and in a little more detail on one important point of disagreement in Conflating value alignment and intent alignment is causing confusion.
None of those address one of his important reasons for pessimism: humans have so far shown themselves to be just terrible at taking the dangers of AGI and the difficulties of alignment seriously. Here I think EY is too pessimistic; humans are short-sighted and argumentative as hell, but they are capable of taking serious issues seriously when they’re staring them in the face. Attitudes will change when AI is obviously important, and our likely timelines are long enough for that to make at least some difference.
Read ~all of the Sequences. Read all of SSC (don’t keep up with ACX).
Pessimistic about survival, but attempting to be aggressively open-minded about what will happen instead of confirmation-biasing my views from 2015.