That largely depends on where AI safety’s talent has been going, and could go.
I’m thinking that most of the smarter quant thinkers have been doing AI alignment 8 hours a day and probably won’t succeed, especially without access to AI architectures that haven’t been invented yet, and that most of the people researching policy and cooperation weren’t our best.
If our best quant thinkers are doing alignment research for 8 hours a day with systems that probably aren’t good enough to extrapolate to the crunch-time systems, and our best thinkers haven’t been researching policy and coordination (e.g. historically unprecedented coordination takeoffs), then the expected hope from policy and coordination is much higher, and our best quant thinkers should be doing policy and coordination during this time period. Even if we’re 4 years away, they can mostly do human-focused research for the freshman and sophomore years and go back to alignment research for the junior and senior years. The same holds if we’re two years away.