There are a lot of places which somewhat argue for FOOM—i.e., very fast intelligence growth in the future, probably not preceded by smooth growth—but they tend to be deeply out of date (Yud-Hanson Debate and Intelligence Explosion Microeconomics), really cursory (Yud's paragraph in List of Lethalities), or a dialogue between two people being confused at each other (Christiano / Yud Discussion).
I think the last one is probably the best overview, but none of them provides a great one. Here's Christiano's blog on the topic, which was written in 2018, so if its predictions hold up, that's evidence for it. (But it is very much not in favor of FOOM… although you really have to read it to see what that actually means.)
Yeah, unfortunately 'somewhat argue for FOOM' is exactly what I'm not looking for. I want a simple, concrete model that can aid communication with people who don't have time to read the 700-page Hanson-Yudkowsky debate. (Which I did read, for the record.)
If that's what you're interested in, I'd suggest: What a compute-centric framework says about AI takeoff speeds — draft report.
This is the closest thing yet! Thank you. Maybe that's it.