Was a philosophy PhD student, left to work at AI Impacts, then the Center on Long-Term Risk, then OpenAI. Quit OpenAI after losing confidence that it would behave responsibly around the time of AGI. Now executive director of the AI Futures Project. I subscribe to Crocker's Rules (http://sl4.org/crocker.html) and am especially interested in hearing unsolicited constructive criticism.
Some of my favorite memes:
[meme image by Rob Wiblin]
[meme image from xkcd]
My EA Journey, depicted on the whiteboard at CLR:
[whiteboard photo, h/t Scott Alexander]
Indeed it did! I remember saying to someone (maybe Beth? I forget) that seeing that graph made me feel slightly relieved. When we started writing AI 2027, my median AGI arrival date was 2027; now it's 2028. (Though most of that update happened for reasons other than the METR graph.)