And what other EAs reading it are thinking, I expect, is plain old Robin-Hanson-style reference class tennis of “Why would you expect intelligence to scale differently from bridges, where are all the big bridges?”
I find these sorts of characterizations very strange, since I feel like I know quite a lot of EAs, but approximately nobody who's really into that sort of reference class forecasting (at least not beyond where Paul and Eliezer agree that superforecaster-style methodology is sound). I'm curious who specifically you're thinking of other than Robin Hanson (who afaik wouldn't describe himself as an EA), but feel free not to answer if you don't want to call anyone out publicly. I think it's worth flagging, though, that this characterization is quite at odds with my experience of EAs generally being very into gears-level/inside-view modeling.