I've translated this discussion into Russian.
https://vk.com/@makikoty-obsuzhdenie-agi-s-eliezerom-udkovskim
Thank you, Tapatakt! :)
I feel like https://www.lesswrong.com/s/n945eovrA3oDueqtq could be even more useful to have in foreign languages, though that’s a larger project.
A general thought about translations: If I were translating this stuff, I'd plausibly work down a list like this, translating later items once the earlier ones were covered:
1. Rationality: A-Z
2. Inadequate Equilibria (+ Hero Licensing)
3. Scout Mindset (if there aren't legal obstacles to distribution)
4. Superintelligence (if there aren't legal obstacles to distribution)
5. There's No Fire Alarm for Artificial General Intelligence
6. AI Alignment: Why It's Hard, and Where to Start (and/or Ensuring Smarter-Than-Human Intelligence Has a Positive Outcome)
7. Security Mindset and Ordinary Paranoia + Security Mindset and the Logistic Success Curve
8. The Rocket Alignment Problem
9. Selections from Arbital's AI alignment "explore" page (unfortunately not well-organized or fully edited)
10. Late 2021 MIRI Conversations
Thank you; got it! Will use your list.
(By the way, the Sequences are already about 90% translated into Russian, mostly several years ago, though not by me.)