Lots of in-the-weeds updates about theory, maybe most interestingly that “tell me what I want to hear” models are a larger fraction of long-term (i.e. not-resolved-with-scale-and-diversity) generalization problems than I’d been imagining.
I’ve increased my probability on fast takeoff in the sense of successive doublings being 4-8x faster instead of 2x faster, by taking more seriously the possibility “if you didn’t hit diminishing-marginal-returns in areas like solar panels, robotics, and software, current trends would actually imply faster-than-industrial-revolution takeoff even without AI weirdness.” That’s not really a Bayesian update, just a change in beliefs.
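To make the “4-8x faster” framing concrete, here is a rough sketch of the arithmetic, under the assumption (mine, not spelled out above) that “successive doublings being Nx faster” means each doubling of output takes 1/N as long as the one before it; the helper names and the 20-year starting doubling time are purely illustrative.

```python
# Illustrative sketch (assumption: each successive doubling of output takes
# 1/speedup as long as the previous one). The doubling times then form a
# geometric series, so even arbitrarily many further doublings fit inside a
# bounded window. All numbers here are made up for illustration.

def time_to_n_doublings(first_doubling_years: float, speedup: float, n: int) -> float:
    """Total years for n successive doublings, each `speedup`x faster than the last."""
    return sum(first_doubling_years / speedup**k for k in range(n))

def limiting_time(first_doubling_years: float, speedup: float) -> float:
    """Limit of the geometric series: total time even for arbitrarily many doublings."""
    return first_doubling_years * speedup / (speedup - 1)

for speedup in (2, 4, 8):
    total_10 = time_to_n_doublings(first_doubling_years=20, speedup=speedup, n=10)
    limit = limiting_time(first_doubling_years=20, speedup=speedup)
    print(f"speedup {speedup}x: 10 doublings in ~{total_10:.1f} years, "
          f"limit ~{limit:.1f} years")
```

Under these toy numbers, moving from 2x to 4-8x speedups compresses the whole remaining sequence of doublings from roughly twice the first doubling time down to not much more than the first doubling time itself, which is one way to see why this counts as a meaningfully faster takeoff.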