Jessica Taylor. CS undergrad and Master’s at Stanford; former research fellow at MIRI.
I work on decision theory, social epistemology, strategy, naturalized agency, mathematical foundations, decentralized networking systems and applications, theory of mind, and functional programming languages.
Blog: unstableontology.com
Twitter: https://twitter.com/jessi_cata
I would totally agree they were directionally correct; I underestimated AI progress. I think Paul Christiano got it about right.
I’m not sure I agree that the use of hyperbolic words was “correct” here; surely “hyperbolic” contradicts the straightforward meaning of “correct”.
Part of the state I was in around 2017 was: there were lots of people around me saying “AGI in 20 years”, by which they meant a thing that shortly afterward FOOMs and eats the sun or something. I thought this was wrong and reflected a strange set of belief updates (one that was not adequately justified, and where some discussions were suppressed because “maybe it shortens timelines”). And I stand by “no FOOM by 2037”.
The people I know these days who seem most thoughtful about the AI that’s around and where it might go (“LLM whisperer” / cyborgism cluster) tend to think “AGI already, or soon” plus “no FOOM, at least for a long time”. I think there is a bunch of semantic confusion around “AGI” that makes people’s beliefs less clear, with “AGI is what makes us $100 billion” as a hilarious example of “obviously economically/politically motivated narratives about what AGI is”.
So, I don’t see these people as validating “FOOM soon” even if they’re validating “AGI soon”, and the local rat-community thing I was objecting to was something that would imply “FOOM soon”. (Although, to be clear, I was still underestimating AI progress.)