“Now, I’ve seen AI movies and robot revolution movies, but what was interesting was that Musk was saying this is a very real possibility, and it led me to look into it.
There were a couple of books that had come out on the subject. One is Superintelligence by Nick Bostrom, and the other is Our Final Invention by James Barrat, which was another terrific book. There was another individual, Eliezer Yudkowsky, who is leading a whole seminar on the internet about this subject.”
I really liked the first episode! Great show, and a lot of the content seemed directly inspired by the AI safety community! I’m impressed!
The first episode:
- mentions the risk that a nuclear bomb could have ignited the atmosphere
- mentions the unilateralist’s curse (though not by name)
- references Musk, Hawking, and Gates as being concerned about AI safety
- explains and illustrates an intelligence explosion
- explains and (surprisingly realistically) illustrates an AI escaping its box
- explains how the AI isn’t malicious; we’re just in the way (just like ants are to us)
- notes how most people dismiss AI concerns as “just sci-fi” instead of engaging with them from first principles
- illustrates the misalignment of a company’s goals with human goals
“Hello, I’m NeXt.” NeXt isn’t just zir name; ze’s also the next entity, the one that will replace humanity.
The quote at the top is from an interview with the show’s creator: https://www.cbr.com/foxs-next-creator-introduces-his-demonic-ai-with-deadly-methods/
oh wow, nice!