I recently came across a post on LinkedIn, and I have to admit that the brilliance of the arguments and the coherent, frankly bulletproof ontology on display blew me away; I immediately had to make a major update to p(doom).
I think that the magnitude of the AI alignment problem has been ridiculously overblown & our ability to solve it widely underestimated.
I’ve been publicly called stupid before, but never as often as by the “AI is a significant existential risk” crowd.