Engaging Seriously with Short Timelines
It seems like transformative AI might be coming fairly soon. By transformative AI I just mean AI that will rapidly accelerate economic and technological progress. Of course, I am not ruling out a true singularity either. I am assuming such technology can be created using variants of current deep learning techniques.
Paul Christiano has written up arguments for a ‘slow takeoff’ where “There will be a complete 4-year interval in which world output doubles, before the first 1-year interval in which world output doubles.” It is unclear to me whether that is more or less likely than a rapid and surprising singularity. But it certainly seems much easier to prepare for. I don’t think we have a good model of what exactly will happen, but we should prepare for as many winnable scenarios as we can.
What should we do now if we think big changes are coming soon? Here are some ideas:
Work on quickly usable AI safety theory: Iterated Amplification and Distillation—Assuming timelines are short, we might not have time for provably safe AI. We need AI safety theory that can be applied quickly to neural nets. Any techniques that can quickly be used to align GPT-style AI are very high value. If you have the ability, work on them now.
IDA is a good framework to bet on, in my opinion, and OpenAI seems to be betting on it. Here is an explanation. Here is a LessWrong discussion. If you are mathematically inclined and understand the basics of deep learning, now might be a great time to read the IDA papers and see if you can contribute.
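For intuition, here is a minimal toy sketch of the amplify/distill loop at the core of IDA. Every name in it (`model.answer`, `human_decompose`, `train_to_imitate`) is a hypothetical placeholder of mine, not code from any actual IDA implementation; the real scheme amplifies and distills large trained models, not toy functions.

```python
# Toy sketch of the IDA loop. All names are hypothetical placeholders.

def amplify(model, question, human_decompose):
    """Answer a hard question by having a human break it into subquestions,
    answering each subquestion with the current model, and recombining."""
    subquestions, recombine = human_decompose(question)
    sub_answers = [model.answer(q) for q in subquestions]
    return recombine(sub_answers)

def distill(model, questions, human_decompose, train_to_imitate):
    """Train a fresh, fast model to imitate the slower amplified system."""
    targets = [amplify(model, q, human_decompose) for q in questions]
    return train_to_imitate(questions, targets)

def ida(model, question_batches, human_decompose, train_to_imitate):
    """Alternate amplification and distillation, gaining capability each round."""
    for questions in question_batches:
        model = distill(model, questions, human_decompose, train_to_imitate)
    return model
```

The hoped-for property is that each round preserves alignment: the amplified system stays aligned because a human directs the decomposition, and the distilled model stays aligned because it merely imitates the amplified system.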
Get capital while you can—Money is broadly useful and can be quickly converted into other resources in a critical moment. At the very least, money can be converted into time. Be frugal; you might need your resources soon.
Besides, the value of human capital might fall. If you have a lucrative position (ex: finance or tech), now is a good time to focus on making money. Investing in your human capital by going back to school is a bad idea.
Invest Capital in companies that will benefit from AI technology—Tech stocks are already expensive, so great deals will be hard to find. But if things get crazy, you want your capital to grow rapidly. I would especially recommend hedging the ‘transformative AI’ scenario if you will get rich anyway when nothing crazy happens.
I am doing something like the following portfolio:
ARKQ − 27%
BOTZ − 9%
Microsoft − 9%
Amazon − 9%
Alphabet − 8% (ARKQ is ~4% Alphabet)
Facebook − 7%
Tencent − 6%
Baidu − 6%
Apple − 5%
IBM − 4%
Tesla − 0% (ARKQ is 10% Tesla)
Nvidia − 2% (both BOTZ and ARKQ hold Nvidia)
Intel − 3%
Salesforce − 2%
Twilio − 1.5%
Alteryx − 1.5%
BOTZ and ARKQ are ETFs with pretty high expense ratios. You can replicate them yourself if you want to save 68-75 basis points. BOTZ is pretty easy to replicate with only ~$10K.
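For concreteness, here is a rough sketch of what replication looks like: buy each of the fund's top holdings in proportion to its published weight. The tickers, weights, and prices below are made-up placeholders rather than BOTZ's actual holdings; substitute the fund's current holdings list and live prices.

```python
# Rough sketch of replicating an ETF's top holdings with a fixed budget.
# Tickers, weights, and prices are illustrative placeholders only.

budget = 10_000  # dollars

# ticker -> (target portfolio weight, current share price in dollars)
holdings = {
    "NVDA": (0.08, 420.0),
    "ISRG": (0.08, 300.0),
    "ABB":  (0.07, 35.0),
    # ...remaining top holdings, weights summing to ~1.0
}

for ticker, (weight, price) in holdings.items():
    shares = int(budget * weight / price)  # round down to whole shares
    print(f"Buy {shares} shares of {ticker} (~${shares * price:,.0f})")
```

Rounding to whole shares leaves some cash unallocated, and you take on the rebalancing and tax bookkeeping the fund would otherwise handle, which is part of what the expense ratio pays for.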
Several people think that land will remain valuable in many scenarios, but I don’t see a good way to operationalize a bet on land. Some people have suggested buying options, since it is easier to get leverage and the upside is higher. Getting the timing right seems tricky to me, but if you think you can time things, buy options.
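To make the timing risk concrete, here is a toy payoff calculation for a long call held to expiry (the numbers are made up for illustration): the leverage is great if the move arrives before expiry, and the premium is a total loss if it arrives after.

```python
# Toy payoff for a long call option held to expiry. Numbers are illustrative.

def long_call_pnl(spot_at_expiry, strike, premium, contracts=1):
    """Profit/loss in dollars; one US equity option contract covers 100 shares."""
    intrinsic = max(spot_at_expiry - strike, 0.0)
    return (intrinsic - premium) * 100 * contracts

# The stock moons before expiry: the call multiplies your stake.
print(long_call_pnl(spot_at_expiry=300, strike=150, premium=12))  # 13800.0
# The same move happens a month after expiry: the premium is simply gone.
print(long_call_pnl(spot_at_expiry=140, strike=150, premium=12))  # -1200.0
```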
Physical and Emotional Preparation—You don’t want your body or mind to fail you during the critical period. Invest in keeping yourself as healthy as possible. If you have issues with repetitive strain injury (RSI), work on fixing them now so you can give future developments your full attention.
You can also invest in mental preparation. Meditation is high value for many people. A systematic study of rationality techniques could be useful. But keep in mind that it is easy to waste time if you approach training casually. Track your results and have a system!
In general, you want to make these investments now while you still have time. Keep in mind these investments may conflict with attempts to increase your monetary capital. I would prioritize keeping yourself healthy. Make sure you are getting good returns on more speculative investments (and remember many self-improvement plans fail).
Political Organizing and Influence—Technological progress does not intrinsically help people. Current technology can be used for good ends, but it can also be used to control people on a huge scale. One can interpret the rise of humanity as singularity 1.0: by the standards of previous eras, change accelerated a huge amount. ‘Singularity 1.0’ did not go so well for the animals in factory farms. Even if we align AI, we need to make the right choices, or singularity 2.0 might not go so well for most inhabitants of the Earth.
In a slow takeoff, human governments are likely to be huge players. As Milton Friedman said, “Only a crisis—actual or perceived—produces real change.” If there is a crisis coming, there may be large political changes coming soon, and influencing those changes might be of high value. Politics can be influenced from both the outside and the inside. Given the political situation, I find it unlikely that an AI arms race can be averted for long. But various sorts of intergovernmental cooperation might be possible, and increasing the odds of such deals could be high value.
Capabilities Research—This is a sketchy and rather pessimistic idea. But imagine that GPT-3 has already triggered an arms race, or at least that GPT-4 will. In that case, it might make sense to help a relatively values-aligned organization win (such as OpenAI, as opposed to the CCP). If you are, or could become, very talented at deep learning, you might have to grapple with this option.
What ideas do other people have for dealing with short timelines?
Cross-posted from my blog: Short Timelines