Any thoughts or feedback on how to approach this kind of investigation, or on which existing foresight frameworks you think would be particularly helpful here, are very much appreciated!
As I mentioned in the post, the Canadian and Singaporean governments are, to my knowledge, the two doing the best work in this space.
Fortunately, some organizations have created rigorous foresight methods. The top contenders I came across were Policy Horizons Canada within the Canadian Federal Government and the Centre for Strategic Futures within the Singaporean Government.
As part of this kind of work, you want to be doing scenario planning multiple levels down. How does AI interact with VR? Once you have that, how does the combination interact with security and defence? How does this impact offensive work? What geopolitical factors work their way in? Does public sentiment around job loss shape the development of these technologies in specific ways? For example, you might see stronger pushback from more established, heavily regulated industries with strong union support.
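To make "multiple levels down" a bit more concrete, here is a minimal toy sketch of how you might mechanically enumerate layered scenario prompts before doing the actual analysis. The factor lists and the three-level structure are illustrative assumptions on my part, not a method from the post or from Policy Horizons Canada / the Centre for Strategic Futures.

```python
# Toy sketch (illustrative, not from the original comment): enumerate layered
# "what if" prompts by crossing technologies, then domains, then pressures.
# Each generated prompt is a starting point for deeper qualitative analysis.
from itertools import combinations

# Assumed example factors; swap in whatever is relevant to your investigation.
technologies = ["AI", "VR", "robotics"]
domains = ["security & defence", "offensive cyber", "labour markets"]
pressures = ["geopolitical competition", "public backlash over job loss", "union-backed regulation"]

def scenario_prompts(levels=3):
    """Yield nested prompts: tech x tech, then x domain, then x pressure."""
    for tech_a, tech_b in combinations(technologies, 2):
        first = f"How does {tech_a} interact with {tech_b}?"
        if levels == 1:
            yield first
            continue
        for domain in domains:
            second = f"{first} Given that, how does the combination affect {domain}?"
            if levels == 2:
                yield second
                continue
            for pressure in pressures:
                yield f"{second} And how does {pressure} reshape that trajectory?"

if __name__ == "__main__":
    # Print a handful of third-level prompts as examples.
    for prompt in list(scenario_prompts())[:5]:
        print(prompt)
```

The point of the sketch is just that the combinatorics get large quickly, which is why the structured workshop methods used by the foresight groups mentioned above matter: you need some way to prioritize which branches to explore in depth.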
Aside from that, you might want to reach out to the Foresight Institute, though I'm a bit more skeptical that their methodology will help here (I'm less familiar with it, and I like the organizers overall).
I also think the Malicious Use of AI report from a few years ago would be a helpful source of inspiration, particularly because they held a workshop with people from a range of backgrounds. There may be better, more recent work I'm unaware of.
Additionally, I’d like to believe that this post was a precursor to Vitalik’s post on d/acc (defensive accelerationism), so I’d encourage you to look at that.
Another thing to look into is companies in the cybersecurity space. I think we'll be getting more AI-safety-pilled orgs in this area soon. Lekara is an example: I met two of their employees, and they essentially told me that the vision is to embed themselves in companies and then keep figuring out how to make AI safer and the world more robust from that position.
There are also more organizations popping up, like the Center for AI Policy, and my understanding is that Cate Hall is starting an org that focuses on sensemaking (and grantmaking) for AI Safety.
If you or anyone else is interested in continuing this kind of work, send me a DM. I'd be happy to provide guidance as best I can.
Lastly, I will note that people have generally avoided this kind of work because "if you have a misaligned AGI, well, you are dead no matter how robust you make the world or whatever you plan around it." I think this view is misguided: you can potentially make our situation a lot better by doing this kind of work, and recent discussions on AI Control (rather than Alignment) are useful for questioning previous assumptions.