I think a valuable use of forecasting would be predicting which of the top thinkers in technical alignment will successfully pivot to making big progress and discoveries in AI governance. Thinking about solutions to the current global situation with AI is probably worth a much larger share of their day than the current 0-1% (maybe an hour a day, or at least two hours per week).
The initial barriers to having good takes on governments are significant: many information sources on global affairs present themselves as high-quality while actually being dogshit, e.g. news corporations and many documentaries. (For example, the military doesn't actually hand the nuclear arsenal over to a new person every 4 or 8 years just because they won an election; that framing is obvious propaganda.) I think these barriers to entry are causing our best people to bounce off the most valuable approaches.
Those barriers to entry are worth overcoming, but I haven't yet thought of a good way to do so without flying people to DC (the Bay Area has a strong libertarian cultural background, which produces failure modes like assuming that every part of every intelligence agency is as incompetent as FDA bureaucrats).
I'm currently aware of solid potential in John Wentworth, Jan Kulveit, and Andrew Critch. Yudkowsky already resolved as "YES". These people should be helped to get ready to make big discoveries ASAP, since the world is moving fast.