I mean to say that if you understand the work of Russell or other AI researchers, you understand just how large the gap is between what we know and what we could possibly apply friendliness to. Friendliness research is purely aspirational and highly speculative. It’s far more pie-in-the-sky than anti-aging research, even. Nothing wrong with Russell calling for pie-in-the-sky research, of course, but I think most people don’t understand the gulf.
When somebody says something like “Google should be careful they don’t develop Skynet”, they’re demonstrating the mistaken belief that we have even the faintest notion of how to develop Skynet (and, happily, that means AI safety isn’t much of a problem).
I’ve read AIMA, but I’m not really up to speed on the last 20 years of cutting-edge AI research, which it covers less. I don’t have the same intuition that AGI concerns are significantly more hypothetical than anti-aging work. For me that would mean something like “any major AGI development before 2050 or so is so improbable it’s not worth considering”, given that I’m not very optimistic about quick progress in anti-aging.
This would be my intuition if I could be sure the problem looks something like “engineer a system at least as complex as a complete adult brain”. The problem is that an AGI solution could also look like “engineer a learning system that learns to behave at human-level intelligence or above, within a human lifespan or faster”, and I have much shakier intuitions about what the minimal required invention is for that to happen. It’s probably still a ways out, but I have nothing like the same certainty of it being a ways out as I have for the “directly engineer an adult-human-brain-equivalent system” case.
So, given that this whole thread is about knowing the literature better, what should I go read to build better intuition for estimating limits on the necessary initial complexity of learning systems?