This post was published shortly before Elon Musk responded to the podcast featuring Eliezer; Eliezer then replied to Musk's response. Elon Musk's tweet is here: https://twitter.com/elonmusk/status/1628086895686168613
Also, there’s a follow-up to the podcast, still featuring Eliezer, here: https://twitter.com/i/spaces/1PlJQpZogzVGE
EDIT to update: Elon Musk is no longer following Eliezer Yudkowsky: https://twitter.com/BigTechAlert/status/1628389659649736707
EDIT 2: Lex Fridman tweets “I’d love to talk to @ESYudkowsky. I think it’ll be a great conversation!” https://twitter.com/lexfridman/status/1620251244463022081
EDIT 3: Sam Altman posts a selfie with Eliezer and Grimes: https://twitter.com/sama/status/1628974165335379973
EDIT 4: Elon Musk: “Having a bit of AI Existential angst today” https://twitter.com/elonmusk/status/1629901954234105857
Eliezer Yudkowsky replies (https://twitter.com/ESYudkowsky/status/1629932013712187395): “Remember that many things you could do to relieve your angst are actively counterproductive! Don’t give in to the fallacy of ‘needing to do something’ even if that makes things worse! Prove the prediction markets wrong about you!”
EDIT 5: From a Reuters article. Elon Musk: “I’m a little worried about the AI stuff [...] We need some kind of, like, regulatory authority or something overseeing AI development [...] make sure it’s operating in the public interest. It’s quite dangerous technology. I fear I may have done some things to accelerate it.”
EDIT 6: Eliezer: “I should probably try another podcast [...] YES FINE I’LL INQUIRE OF LEX FRIDMAN” https://twitter.com/ESYudkowsky/status/1632140761679675392
EDIT 7: Elon Musk: “In my case, I guess it would be the Luigi effect”: https://twitter.com/elonmusk/status/1632487656742420483
EDIT 8: Another exchange between Elon Musk and Eliezer: https://twitter.com/elonmusk/status/1637176761220833281
EDIT 9: Elon Musk tweets: “Maximum truth-seeking is my best guess for AI safety”: https://twitter.com/elonmusk/status/1637371603561398276
EDIT 10: Yann LeCun on Twitter (https://twitter.com/ylecun/status/1637883960578682883): “I think that the magnitude of the AI alignment problem has been ridiculously overblown & our ability to solve it widely underestimated. I’ve been publicly called stupid before, but never as often as by the ‘AI is a significant existential risk’ crowd. That’s OK, I’m used to it.”