Would you say it’s taken particularly seriously NOW? There are some books about it and some researchers focusing on it, but that’s a very tiny portion of the total thought put into the topic of machine intelligence.
I think:
1) About the same percentage of publishing on the overall topic went to risks then as now. There’s a ton more on AI risks now simply because there are three orders of magnitude more overall thought and writing on AI generally.
2) This may still be true. Humans aren’t good at long-term risk analysis.
3) Perhaps more than 60 years of thinking will be required. We’re beginning to ask the right questions (I hope).