I go back in time to the year 1992. I bring with me as much information about AI, AI risk, AI safety, etc. as I can. I also bring back the sort of information that will help me make loads of money—technologies and startups to invest in, for example. The code for Ethereum or Bitcoin or something. The code for Transformers and LSTMs and whatnot.
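(As an aside on what "the code for Transformers" would actually amount to: the core of a Transformer is scaled dot-product attention, which fits on an index card. Here is a minimal NumPy sketch, purely illustrative; the single-head simplification and toy shapes are my own assumptions, not part of the plan.)

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal single-head attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over the key dimension
    return weights @ V                             # weighted sum of values

# Toy usage: 4 tokens, 8-dimensional embeddings, attending to themselves
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)        # self-attention
print(out.shape)                                   # (4, 8)
```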
Basically, my plan involves repeated applications of the following two steps:
1. Make money on the stock market (later, as a venture capital firm; later still, as a giant tech company).
2. Identify extraordinary people who in my timeline were sympathetic to AI risk (e.g. Nick Bostrom, EY, Elon Musk), persuade them that I'm from the future and here to help, then convince them of AI risk and recruit them into my conspiracy/corporation.
By 2020, I’d have a team of some of the world’s most capable people, at the helm of a megacorporation the size of Alphabet, Facebook, Amazon, Tesla, and SpaceX combined. The rationalist and EA movements would also exist, but they’d have more funding and they would have started ten years earlier. We’d all be very concerned about AI safety and we’d have a budget of a hundred billion dollars per year. We’d have tens of thousands of the world’s smartest researchers working for us. I feel fairly confident that we would then figure out what needs to be done and do it.