Hi! Thank you for this project, I’ll attempt to fill the survey.
Apologies if you have already encountered them, but here are some extra sources I think are relevant to this post:
the Modeling Transformative AI Risk (MTAIR) project (an attempt to map out the relationships between key hypotheses and cruxes involved in debates about catastrophic risks from advanced AI);
Turchin & Denkenberger’s Classification of Global Catastrophic Risks Connected with Artificial Intelligence (lists and categorizes a wide range of catastrophic scenarios by narrow vs. general AI, near-term vs. long-term, misuse vs. accident, and many other factors, with references);
Sotala’s Disjunctive Scenarios of Catastrophic AI Risk.
Hi! I appreciate you taking a look. I’m new to the topic and am enjoying developing this out and learning some potentially useful new approaches.
The survey is admittedly rather ambiguous, and I’ve received a ton of feedback and lessons learned. As my first attempt at a survey, it has been, whether I wanted one or not, a Ph.D. in what NOT to do with surveys. A learning experience, to say the least.
I’m tracking the MTAIR folks and have been working with them as I’m able, in the hope that our projects can complement each other. MTAIR is a more substantive long-term project, though, and I should be clear about focusing on it over the next few months. The scenario mapping project (at least its first stage, depending on whether there’s further development) will be complete in roughly three months. A short project, unfortunately, which has interfered with changing or rescoping it. But I’m hoping there will be some interesting results using the GMA methodology.
The Turchin & Denkenberger piece is the closest classification scheme I’ve come across to what I’m working on. Thanks for flagging that one.
If it looks reasonable to expand and refine the scheme and run another iteration, perhaps with a workshop, that could be useful. It’s hard to do a project like this in a six-month timeframe.