Congratulations on a great prioritization!
Perhaps the research that we (Existential Risk Observatory) and others (e.g. Nik Samoylov, Koen Schoenmakers) have done on effectively communicating AI x-risk could be something to build on. Here's our first paper and three blog posts (the second includes a measurement of the effectiveness of Eliezer's TIME article; its numbers are actually pretty good!). We're currently working on a base-rate public awareness update and on further research.
Best of luck and we’d love to cooperate!