Seems really interesting, but I'm wondering how they can measure the accuracy of estimates for low-probability, long-term risks like "someone could release a hacked virus". I look forward to reading your fleshed-out post! The aligning-incentives idea reminds me of this: https://www.lesserwrong.com/posts/a7pjErKGYHh7E9he8/the-unfriendly-superintelligence-next-door