Now that SI has been rebranded as MIRI, I’ve had “figure out a new framing for the AI risk talk, and concise answers to common questions” on my to-do list for several months, but haven’t gotten to it yet. I would certainly appreciate your help with that, if you’re willing.
Partly, I’m using “Effective Altruism and the End of the World” as a tool for testing different framings of some of the key issues. I’ll be giving the talk many times, iterating on it between presentations and taking notes on which questions people ask most frequently and which framings and explanations get the best response.
Christiano has been testing different framings too, though mostly with the upper crust of cognitive ability. Maybe we should have a side meeting about framing when you’re in town for MIRI’s September workshop?
Taped for non-CA folks?
Eventually, once it’s good enough.