I share your concerns, as do some people I know. Thoughts along these lines seem to have become more common in the community over the past few months, but nobody I was aware of had yet taken the first steps to do something about it and galvanise a new effort on AGI risk outreach.
So we did: https://forum.effectivealtruism.org/posts/DS3frSuoNynzvjet4/agi-safety-communications-initiative
We’re not yet sure exactly what we’re aiming for, but we all think that current efforts to communicate AGI risk to the wider world are lacking. We need AGI danger to be as widely understood and politically planned for as climate change is.
Personally, I’d like our efforts to culminate in something like a new umbrella EA org specialised in broad outreach on AGI risk, covering everything from conceptualising grand strategy to planning and running concrete campaigns to networking existing outreach efforts.
This isn’t 2012. AGI isn’t as far-mode as it used to be: you can show people systems like PaLM, and how good their language understanding and reasoning capabilities are getting. It may be very hard, but I think normal people could indeed be made to see that this is going somewhere dangerous, so long as their jobs don’t directly depend on believing otherwise.
If you’d like to join up, do follow the link.