I’m going to throw out some more questions. You are by no means obligated to answer.
In your AI Safety Engineering paper you say, “We propose that AI research review boards are set up, similar to those employed in review of medical research proposals. A team of experts in artificial intelligence should evaluate each research proposal and decide if the proposal falls under the standard AI – limited domain system or may potentially lead to the development of a full blown AGI.”
But would we really want to do this today? I mean, in the near future—say the next five years—AGI seems pretty hard to imagine. So might this be unnecessary?
Or, what if later on when AGI could happen, some random country throws the rules out? Do you think that promoting global cooperation now is a useful way to address this problem, as I assert in this shamelessly self-promoted blog post?
The general question I am after is, How do we balance the risks and benefits of AI research?
Finally you say in your interview, “Conceivable yes, desirable NO” on the question of relinquishment. But are you not essentially proposing relinquishment/prevention?
Just because you can’t imagine AGI in the next five years doesn’t mean someone won’t propose a perfectly workable algorithm for achieving it in four. So yes, it is necessary. Once everyone sees how obvious AGI design is, it will be too late. Random countries don’t develop cutting-edge technology; it is always done by the same superpowers (USA, Russia, etc.). I didn’t read your blog post, so I can’t comment on “global cooperation”. As to the general question you are asking, you can get most conceivable benefits from domain expert AI without any need for AGI. Finally, I do think that relinquishment/delaying is desirable, but I don’t think it is implementable in practice.
you can get most conceivable benefits from domain expert AI without any need for AGI.
Is there a short form of where you see the line between these two types of systems? For example, what is the most “AGI-like” AI you can conceive of that is still “really a domain-expert AI” (and therefore putatively safe to develop), or vice-versa?
My usual sense is that these are fuzzy terms people toss around to point at very broad concept-clusters, which is perfectly fine for most uses. But if we’re really getting to the point of proposing policy based on these categories, it’s probably good to have a clearer shared understanding of what we mean by them.
That said, I haven’t read your paper; if this distinction is explained further there, that’s fine too.
Great question. To me a system is domain-specific if it can’t be switched to a different domain without re-designing it. I can’t take Deep Blue and use it to sort mail instead. I can’t take Watson and use it to drive cars. An AGI (for which I have no examples) would be capable of switching domains. If we take humans as an example of general intelligence, you can take an average person and have them work as a cook, driver, babysitter, etc., without any need for re-designing them. You might need to spend some time teaching that person a new skill, but they can learn efficiently, perhaps just by watching how it should be done. I can’t do this with domain expert AI: Deep Blue will not learn to sort mail regardless of how many times I demonstrate the process.
(nods) That’s fair. Thanks for clarifying.
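To make that criterion concrete, here is a toy sketch of the distinction (my own illustration, not anything from the paper; every class and method name here is hypothetical): a domain-expert system exposes exactly one capability and can only be retargeted by redesigning it, while a general system can pick up a new task from demonstrations.

    from typing import Dict, List, Tuple


    class ChessEngine:
        """Domain-expert system: its only interface is its one domain."""

        def best_move(self, position: str) -> str:
            # Stand-in for a real search; it can only ever answer in chess terms.
            return "e2e4"
        # There is no way to point this object at mail sorting; retargeting it
        # means redesigning it.


    class GeneralAgent:
        """General system: acquires new input-to-output tasks from demonstrations."""

        def __init__(self) -> None:
            self.skills: Dict[str, Dict[str, str]] = {}

        def learn_from_demonstrations(self, task: str,
                                      examples: List[Tuple[str, str]]) -> None:
            # Crude stand-in for learning: memorize the demonstrated mapping.
            self.skills[task] = dict(examples)

        def perform(self, task: str, situation: str) -> str:
            if task not in self.skills:
                raise ValueError(f"never taught: {task}")
            return self.skills[task].get(situation, "no demonstration covers this")


    agent = GeneralAgent()
    agent.learn_from_demonstrations(
        "sort_mail", [("zip 10001", "bin A"), ("zip 94105", "bin B")]
    )
    print(agent.perform("sort_mail", "zip 10001"))  # -> bin A

The point is only the shape of the interfaces: the chess engine has nothing analogous to learn_from_demonstrations, which is exactly the sense in which Deep Blue cannot be shown how to sort mail.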