I’m writing a book about epistemology. It’s about The Problem of the Criterion, why it’s important, and what it has to tell us about how we approach knowing the truth.
I’ve also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.
I can’t help but wonder if part of the answer is that AI agents seem dangerous and people are selecting out of producing them.
Like, I’m not an expert, but creating AI agents seems extremely fun and appealing, and I’m intentionally not working on them because it seems safer not to build them. (Whether you think my contributions to building them would matter is another question.)