That last paragraph seems important. There's a type of person who is new to the AI discourse and doesn't have an opinion yet, and who will bounce off whichever "side" appears most hostile to them. If they have misguided ideas, that might be the truth-seeking side that gently criticizes them. (Not saying that's the case for the author of this post!)
It's really hard to change the mind of someone who has found their side in AI. But it's not hard to keep them from joining one in the first place!
Despite being "into" AI safety for a while, I haven't picked a side. I do believe it's extremely important and deserves more attention, and I believe that AI actually could kill everyone in less than 5 years.
But any effort spent pinning down one's "p(doom)" is effort not usefully spent on things like: how to actually make AI safe, how AI works, how to approach this problem as a civilization/community, how to think about this problem. And, as was my intention with this article, "how to think about things in general, and how to make philosophical progress".