Despite being “into” AI safety for a while, I haven’t picked a side. I do believe it’s extremely important and deserves more attention, and I believe AI actually could kill everyone in less than 5 years.
But any effort spent on pinning down one’s “p(doom)” is effort not spent on things like: how to actually make AI safe, how AI works, how to approach this problem as a civilization/community, and how to think about this problem. And, as was my intention with this article, “how to think about things in general, and how to make philosophical progress”.