You might switch from building ‘career capital’ and useful skills to working directly on prosaic alignment, if you now consider it plausible that “attention is all you need for AGI”.
Before OpenAI’s various models, prosaic alignment looked more like an important test run and field-building exercise, so that we’d be well placed to shape the next AI/ML paradigm around something like MIRI’s Agent Foundations work. Now it looks like prosaic alignment might be the only kind we get, and the deadline might be very early indeed.