I don’t see why humanity can make rapid progress in fields like ML while being unable to make progress on AI alignment.
The reason normally given is that AI capability is much easier to test and optimise than AI safety. With alignment, much as with philosophy, it’s very unclear when you are making progress, and sometimes unclear whether progress is even possible. It doesn’t help that AI alignment isn’t particularly profitable in the short term.
I’m surprised to see so little discussion here of educational attainment and its relation to birth order. Most of the discussion seems to centre on biological differences. Did I miss something?
Families may only have enough money to send one child to school or university, and this is commonly the first-born. As a result, I’d expect to see a trend of more first-borns in academic fields like mathematics, as well as on LessWrong.
As a quick check on this hunch, this paper seems to reach the same conclusion:
https://www.sciencedirect.com/science/article/abs/pii/S0272775709001368
“birth order turns out to have a significant negative effect on educational attainment. This decline in years of schooling with birth order turns out to be approximately linear.”
I’d be interested to know whether the effect persists if we somehow control for educational attendance/resources.