For anyone who’d like to see questions of this type on Metaculus as well, there’s this thread. For certain topics (alignment very much included), we’ll often do the legwork of operationalizing suggested questions and posting them on the platform.
Side note: we’re working on spinning up what is essentially an AI forecasting research program; part of that will involve predicting the level of resources allocated to, and the impact of, different approaches to alignment. I’d be very glad to hear ideas from alignment researchers on how best to go about this, and how we can make its outputs as useful as possible. John, if you’d like to chat about this, please DM me and we can set up a call.