This morning I was thinking about trying to find some sort of written account of the best versions and/or most charitable interpretations of the views and arguments of the “Not-worried-about-x-risk” people, but written by someone who is concerned about x-risk, because when people who aren’t worried about x-risk try to explain what they think, I genuinely feel like they are speaking a different language. And this causes me a reasonable amount of stress, because so many people who I would consider significantly smarter than me and better than me at thinking about things… aren’t worried about x-risk. But I can’t understand them.
So, when I saw the title of this post and read the first sentence, I was pretty excited, because I thought it had a good chance of being exactly what I was looking for. But after reading it, I think it just increased my feeling of not understanding. Anytime I try to imagine myself holding or defending these views, I always come to the conclusion that my primary motivation would be “I want these things to be true”. But I also know that most of these people are very capable of recognizing when they believe something just because they want to, and I don’t really think that’s compelling as a complete explanation for their position.
I don’t even know if this is a “complaint” about the explanation presented here or about the views themselves, because I don’t understand the views well enough to separate the two.
That’s a completely fair point/criticism. I also don’t buy these arguments, and would be interested in AI x-risk skeptics helping me steelman them further / add more categories of argument to this list.
However, as someone in a similar position, “trying to find some sort of written account of the best versions and/or most charitable interpretations of the views and arguments of the “Not-worried-about-x-risk” people,” I decided to try to do this myself as a starting point.
I don’t want it to sound like this wasn’t useful or worth reading. My negativity comes almost entirely from really wanting a moment of clarity and not getting it. I think you did a good job of capturing what they actually do say, and I’ll probably come back to it a few times.