It seems noteworthy just how little the AI pioneers of the 1940s-80s seemed to care about AI risk. There is no obvious reason a book like “Superintelligence” couldn’t have been written in the 1950s, yet for some reason that didn’t happen.
I can think of three possible reasons for this:
1. They actually DID care and published extensively about AI risk, but I’m simply not well enough schooled in the history of AI research to know it.
2. Deep down, people involved in early AI research knew they were still a long, long way from achieving significantly powerful AI, despite the optimistic public proclamations made at the time.
3. AI risks are highly counter-intuitive, and it simply took another 60 years of thinking to understand them.
Anyone have any thoughts on this question?
Would you say it’s taken particularly seriously NOW? There are some books about it, and some researchers focusing on it, but that’s still a very tiny portion of the total thought put into the topic of machine intelligence.
I think:
1) About the same percentage of publishing on the overall topic went to risks then as now. There’s a ton more on AI risks now because there are three orders of magnitude more overall thought and writing on AI generally.
2) This may still be true. Humans aren’t good at long-term risk analysis.
3) Perhaps more than 60 years of thinking will be required. We’re beginning to ask the right questions (I hope).