Pretty sobering, and it’s clear it’s past time we got serious about this. I might put together a post of my own calling for creative ideas that ordinary people can implement to help the cause, but the most obvious thing is to raise awareness. I hope Yudkowsky gets the chance to do a lot more interviews like this.
Yudkowsky, if you ever see this, please don’t give up hope. Crazy breakthroughs do happen, and more people are getting into alignment as time goes on.
I can’t even get a good answer to “What’s the GiveWell of AI safety?” so I can quickly donate to a reputable, widely agreed-upon option with little thinking. At best I get old lists pointing to a ton of random small orgs, and I give up. I’m not optimistic that ordinary, less convinced people who want to help are having an easier time.
Long-Term Future Fund. They give grants to AI safety orgs and researchers, especially the kind that Yudkowsky would be less sad about.
I’d encourage you to do that.
https://www.lesswrong.com/posts/CqmDWHLMwybSDTNFe/fighting-for-our-lives-what-ordinary-people-can-do
I’ve thought for a while now that the primary problem in alarmism (I’m an alarmist myself) is painting a concise, believable picture. It takes a willful effort and an open mind to build a “realistic” picture of this heretofore unknown mechanism/organism for oneself, and it is nearly impossible to do for someone who is skeptical of, or opposed to, the potential conclusion.
Being a web dev, I’ve brainstormed on occasion about ways to build short, crowd-ranked chains of causal steps which people could evaluate for themselves, with doubts and supporting evidence attached to each step. It’s still a vague vision, which is why I raise it here: to see if anyone else wants to get involved at that abstract design level. (hmu)
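To make the idea a bit more concrete, here’s one possible sketch of the data model in TypeScript. Everything here is hypothetical (the names, the vote-based scoring, the weakest-link assumption that a chain is only as plausible as the product of its steps); it’s just one way the “chain of causal steps with doubts and supporting evidence” idea could be structured, not a committed design.

```typescript
// Hypothetical data model for a crowd-ranked chain of causal steps.
// All names and scoring rules are illustrative, not a settled design.

interface Evidence {
  summary: string;
  supports: boolean; // true = supporting evidence, false = a doubt
  votes: number;     // crowd votes for how convincing this item is
}

interface CausalStep {
  claim: string;
  evidence: Evidence[];
}

// One possible scoring rule: a step's credence is the share of votes
// on its supporting evidence vs. its doubts.
function stepCredence(step: CausalStep): number {
  const support = step.evidence
    .filter(e => e.supports)
    .reduce((sum, e) => sum + e.votes, 0);
  const doubt = step.evidence
    .filter(e => !e.supports)
    .reduce((sum, e) => sum + e.votes, 0);
  const total = support + doubt;
  return total === 0 ? 0.5 : support / total; // no votes yet: agnostic
}

// A causal chain is only as strong as its weakest links, so multiply
// the per-step credences to score the whole chain.
function chainPlausibility(chain: CausalStep[]): number {
  return chain.reduce((p, step) => p * stepCredence(step), 1);
}
```

For example, a two-step chain where the first step’s evidence votes split 4 supporting to 1 doubting (credence 0.8) and the second step has no votes yet (credence 0.5) would score 0.4 overall, which surfaces the unexamined step as the place readers should look next.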
I think the general problem of painting a believable picture could be tackled in a number of different media, though. Unfortunately, we’re drowning in inaccurate or indulgent dystopias, which creates a “boy who cried wolf” effect for earnest ones.
https://www.lesswrong.com/posts/CqmDWHLMwybSDTNFe/fighting-for-our-lives-what-ordinary-people-can-do
Any ideas you have for overcoming the problems of alarmism would be welcome.