If timelines are short, where does the remaining value live? Some fairly Babble-ish ideas:
- Alignment-by-default
  - Both outer and inner alignment by default
    - With full alignment by default, there's nothing to do, I think! One could be an accelerationist, but the reduction in near-term suffering and lives lost doesn't seem large enough to justify the cost in probability of aligned AI
    - Possibly value could be lost if values aren't sufficiently cosmopolitan? One could try to promote cosmopolitan values
  - Inner alignment by default
    - Focus on tools for getting good estimates of human values, or for building an intent-aligned AI
      - Ought's work is a good example
    - Possibly experiment with governance / elicitation structures, like quadratic voting (a minimal sketch of the cost rule follows this list)
      - Also think about how to get good governance structures actually adopted
- Acausal trade
  - In particular, expand the ideas in this post. (I understand Paul to be claiming he argues for tractability somewhere in that post, but I couldn't find it)
  - Work through the details of UDT games, and how we could effect proper acausal trade (a toy sketch follows this list). Figure out how to get the relevant decision-makers on board
- Strong, fairly late institutional responses
  - Work on making states, for example, strong enough to restrict or stop AI development in a coordinated way
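To make the quadratic voting idea concrete, here is a minimal sketch of the cost rule the mechanism is built on. The credit budget and numbers are made up for illustration, and nothing here is tied to any particular implementation:

```python
import math

# Quadratic voting: casting v votes on a single issue costs v**2 voice credits,
# so expressing a stronger preference gets progressively more expensive.

def vote_cost(votes: int) -> int:
    return votes ** 2

def max_affordable_votes(budget: int) -> int:
    # Largest whole number of votes a participant can buy on one issue.
    return math.isqrt(budget)

# With a (hypothetical) budget of 100 credits, a voter can cast at most 10 votes
# on one issue, or split the budget: 6 votes (36 credits) on one issue plus
# 8 votes (64 credits) on another uses it exactly.
assert max_affordable_votes(100) == 10
assert vote_cost(6) + vote_cost(8) == 100
```

The quadratic cost is what lets the mechanism elicit intensity of preference rather than just its direction, which is why it seems interesting as an elicitation structure here.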
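As a small starting point for "working through the details", here is a toy sketch of the flavour of reasoning involved, using a symmetric prisoner's dilemma between two agents known to run the same decision procedure. The payoffs are invented and this is only illustrative; real UDT and acausal-trade arguments involve much weaker correlations than exact copies:

```python
# Payoffs for a symmetric prisoner's dilemma: PAYOFF[(my_move, their_move)] = my reward.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def causal_best_response(their_move: str) -> str:
    # Hold the other agent's move fixed and best-respond: defection dominates.
    return max("CD", key=lambda my_move: PAYOFF[(my_move, their_move)])

def updateless_choice() -> str:
    # Both agents run this same procedure, so their outputs are the same logical fact
    # and only the symmetric outcomes (C, C) and (D, D) are actually attainable.
    return max("CD", key=lambda my_move: PAYOFF[(my_move, my_move)])

# Causal reasoning defects whatever it expects the other agent to do;
# reasoning over the shared policy cooperates.
assert causal_best_response("C") == "D" and causal_best_response("D") == "D"
assert updateless_choice() == "C"
```

Much of the detail work presumably lies in extending this from exact copies to agents that are merely similar, and in turning it into actual trades between parties with different resources and values.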
Other things that seem useful:

- Learn the current hot topics in ML. If timelines are short, it's likely that AGI will be built from extensions of the current frontier
- Invest in leveraging AI tools for direct work / getting those things that money cannot buy. This may be a little early, but if the takeoff is at all soft, you might still be able to fit more than ten years' worth of 2020-level intellectual work in before 2030 if you're using the right tools
Thanks! I found this the most helpful of the answers so far. I’d be interested to hear more about leveraging AI tools for direct work; can you say more?
Sure! I see people on Twitter, for example, doing things like having GPT-3 provide autocomplete or suggestions while they're writing, or having it do the grunt work of producing web apps. Plausibly, figuring out how to get the most value out of future AI developments for improving productivity is important.
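For concreteness, the writing-suggestion use looks roughly like the sketch below, which asks GPT-3 to continue whatever you're drafting via OpenAI's completion endpoint. The model choice, prompt, and sampling parameters are placeholders, not a recommendation:

```python
import os
import openai  # OpenAI's Python client, as used with GPT-3 at the time of writing

openai.api_key = os.environ["OPENAI_API_KEY"]

def suggest_continuation(draft: str, length: int = 64) -> str:
    """Ask GPT-3 for a possible continuation of the text drafted so far."""
    completion = openai.Completion.create(
        engine="davinci",   # base GPT-3 model; swap in whatever engine you prefer
        prompt=draft,
        max_tokens=length,
        temperature=0.7,
    )
    return completion.choices[0].text

# e.g. feed in the paragraph you're stuck on and skim the suggestion for ideas
print(suggest_continuation("If timelines are short, the remaining value lives in"))
```

Most of the value is presumably in the surrounding workflow (when to ask, how to filter the suggestions) rather than in the API call itself.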
One issue is that it's not obvious how to prepare for the AI tools that will actually arrive. One piece of work could be thinking about how to prepare flexibly for tools with unknown capabilities, or trying to predict what those capabilities will be.
Other things that come to mind are:

- Practice getting up to speed with new tool setups. If you are very attached to a setup that you like, you might have a hard time leveraging these advances as they come along. Alternatively, try to make sure you can extend your current workflow
- Increase the attention you pay to new (AI) tools. Get used to trying them out, both for the reasons above and because it may be important to move fast in picking up very helpful new tools
To be clear, I'm not sure how much value there is in this direction. It seems pretty plausible to me that AI tooling will be essential for staying competitively productive, but maybe there isn't much of an opportunity to bet on that.