On taking AI risk seriously
Link post
Crossposted from the EA Forum: https://forum.effectivealtruism.org/posts/pKG5fsfrgDSQtssfu/on-taking-ai-risk-seriously
Yet another New York Times piece on AI. A non-AI-safety friend sent it to me saying, “This is the scariest article I’ve read so far. I’m afraid I haven’t been taking it very seriously.” I’m noting this because I’m always curious to observe what moves people, what’s out there that has the power to change minds. In the past few months, there’s been increasing public attention to AI and all sorts of hot and cold takes, e.g., about intelligence, consciousness, sentience, etc. But this might be one of the articles that convey the AI risk message in a language that helps people understand and think about AI safety.
The following is what stood out to me and made me think that it’s time for philosophy of science to also take AI risk seriously and to revisit the idea of scientific explanation given the success of deep learning:
I cannot emphasize this enough: We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.
“If you were to print out everything the networks do between input and output, it would amount to billions of arithmetic operations,” writes Meghan O’Gieblyn in her brilliant book, “God, Human, Animal, Machine,” “an ‘explanation’ that would be impossible to understand.”