I have updated my respect for MIRI significantly based on Stu Russell signing that article. (Russell is a prominent mainstream computer scientist working on related issues; as a result, I think his opinion carries substantially more credibility here than the physicists’.)
If you don’t think that MIRI’s arguments are convincing, then I don’t see how one outlier could significantly shift your perception if that person does not provide additional arguments.
I would give up most of my skepticism regarding AI risks if a significant subset of experts agreed with MIRI, even if they did not provide further arguments (although a consensus would be desirable). But one expert clearly does not suffice to make up for a lack of convincing arguments.
Also note that Peter Norvig, who coauthored ‘Artificial Intelligence: A Modern Approach’ with Russell, does not appear to be too worried.
I mean to say that if you understand the work of Russell or other AI researchers, you understand just how large the gap is between what we know and what we could possibly apply friendliness to. Friendliness research is purely aspirational and highly speculative. It’s far more pie-in-the-sky than anti-aging research, even. Nothing wrong with Russell calling for pie-in-the-sky research, of course, but I think most people don’t understand the gulf.
When somebody says something like “Google should be careful they don’t develop Skynet”, they’re demonstrating the mistaken belief that we have even the faintest notion of how to develop Skynet (and, happily, that means AI safety isn’t much of a problem).
I’ve read AIMA, but I’m not really up to speed on the last 20 years of cutting-edge AI research, which it addresses less. I don’t have the same intuition about AGI concerns being significantly more hypothetical than anti-aging work. For me that would mean something like “any major AGI development before 2050 or so is so improbable that it’s not worth considering”, given that I’m not very optimistic about quick progress in anti-aging.
This would be my intuition if I could be sure the problem looks something like “engineer a system at least as complex as a complete adult brain”. The problem is that an AGI solution could also be “engineer a learning system that will learn to behave at human-level intelligence or above, within a human lifespan or faster”, and I have much shakier intuitions about what the minimal required invention is for that to happen. It’s probably still a ways out, but I have nothing like the same certainty of it being a ways out as I have for the “directly engineer an adult human brain equivalent system” case.
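One crude back-of-envelope for why the second case feels so much less bounded, granting the (debatable) assumption that the genome roughly upper-bounds the initial spec of the one human-level learning system we know of: the human genome is about 3×10^9 base pairs, i.e. under a gigabyte raw (3×10^9 × 2 bits ≈ 750 MB), and only some fraction of that plausibly specifies brain architecture; the adult brain it grows into has on the order of 10^14 synapses, i.e. tens of terabytes even at a few bits per synapse. So the learning-system spec is four or five orders of magnitude smaller than the learned adult state, which is exactly why my intuitions about the minimal required invention are so shaky.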
So given how this whole thread is about knowing the literature better, what should I go read to build better intuition on how to estimate limits for the necessary initial complexity of learning systems?
So for example when Stuart Russell is saying that we really should get more serious about doing Friendly AI research, it’s probably because he’s a bit naive and not that familiar with the actual state of real-world AI?