This is also related to the crux between me and Ajeya Cotra, between me and Paul Christiano, between me and Rohin Shah… I think their view is that the “2020 AGI/TAI training requirements” variable is a lot higher than I think (they're thinking something like 1e36 FLOP; I'm thinking something like 1e29), because they think you'll need to do lots and lots of long-horizon training to get systems that are good at long-horizon tasks, whereas I think you'll be able to get away with mostly training on shorter tasks and then a bit of fine-tuning on longer tasks.