Online Google Hangout on approaches to communication around AGI risk (2017/5/27 20:00 UTC)
We have a number of charities working on different aspects of AGI risk:
- The theory of the alignment problem (MIRI/FHI/more)
- How to think about problems well (CFAR)
However, we don’t have a body dedicated to developing and testing a coherent communication strategy to help postpone the development of dangerous AIs.
I’m organising an online discussion about what we should do about this issue next Saturday.
To find out when people are available, I’ve created a Doodle here. I’m trusting that Doodle handles timezones well; the time slots should be between 12:00 and 23:00 UTC. Let me know if they are not.
We’ll be using the optimal brainstorming methodology.
Send me a message if you want an invite once the time has been decided.
I will take notes and post them here afterwards.