[Question] Did AI pioneers not worry much about AI risks?
It seems noteworthy how little the AI pioneers of the 1940s through the 1980s appear to have worried about AI risk. There is no obvious reason a book like “Superintelligence” couldn’t have been written in the 1950s, yet it wasn’t. Any thoughts on why that was the case?
I can think of three possible reasons for this:
1. They actually DID care and published extensively about AI risk, and I’m simply not well enough schooled in the history of AI research to know it.
2. Deep down, the people involved in early AI research knew they were still a long, long way from achieving significantly powerful AI, despite their optimistic public proclamations at the time.
3. AI risks are highly counter-intuitive, and it simply took another 60 years of thinking to understand them.
Anyone have any thoughts on this question?