For what it’s worth, this is half of why I’m writing a book about epistemology. My initial goal was, once it’s done, to do what I can to get it into the hands of AI researchers and nudge them toward a better understanding of some important ideas in epistemology, on the theory that this will make them more cautious about how they build AI and more open to many rationalist ideas that I think are core to the project of AI safety.
My side goal, which LLMs have made more important, is to write things that will help AIs understand epistemology better and hopefully make them less likely to commit naive epistemic mistakes (since the mistakes they make are the same naive ones most humans make).