To summarize how I see the current state of the debate over “tool AI”:
Eliezer and I have differing intuitions about the likely feasibility, safety and usefulness of the “tool” framework relative to the “Friendliness theory” framework, as laid out in this exchange. This relates mostly to Eliezer’s point #2 in the original post. We are both trying to make predictions about a technology for which many of the details are unknown, and at this point I don’t see a clear way forward for resolving our disagreements, though I did make one suggestion in that thread.
Eliezer has also made two arguments (#1 and #4 in the original post) that appear to be of the form, “Even if the ‘tool’ approach is most promising, the Singularity Institute still represents a strong giving opportunity.” A couple of thoughts on this point:
One reason I find the “tool” approach relevant in the context of SI is that it resembles what I see as the traditional approach to software development. My view is that it is likely to be both safer and more efficient for developing AGI than the “Friendliness theory” approach. If this is the case, it seems that the safety of AGI will largely be a function of the competence and care with which its developers execute on the traditional approach to software development, and the potential value-added of a third-party team of “Friendliness specialists” is unclear.
That said, I recognize that SI has multiple conceptually possible paths to impact, including developing AGI itself and raising awareness of the risks of AGI. I believe that the more the case for SI revolves around activities like these rather than around developing “Friendliness theory,” the higher the bar for SI’s general impressiveness (as an organization and team) becomes; I will elaborate on this when I respond to Luke’s response to me.
Regarding Eliezer’s point #3: I think this largely comes down to how strong one finds the argument for “tool AI.” I agree that one shouldn’t expect SI to respond to every possible critique of its plans. But I think it’s reasonable to expect it to anticipate and respond to the stronger possible critiques.
I’d also like to address two common objections to the “tool AI” framework that came up in comments, though neither of these objections appears to have been taken up in official SI responses.
Some have argued that the idea of “tool AI” is incoherent, or is not distinct from the idea of “Oracle AI,” or is conceptually impossible. I believe these arguments to be incorrect, though my ability to formalize and clarify my intuitions on this point has been limited. For those interested in reading attempts to better clarify the concept of “tool AI” following my original post, I recommend jsalvatier’s comments on the discussion post devoted to this topic as well as my exchange with Eliezer elsewhere on this thread.
Some have argued that “agents” are likely to be more efficient and powerful than “tools,” since they are not bottlenecked by human input, and thus that the “tool” concept is unimportant. I anticipated this objection in my original post and expanded on my response in my exchange with Eliezer elsewhere on this thread. In a nutshell, I believe the “tool” framework is likely to be a faster and more efficient way of developing a capable and useful AGI than the sort of framework for which “Friendliness theory” would be relevant; and that if it isn’t, the sort of work SI is doing on “Friendliness theory” is likely to be of little value. (Again, I recognize that SI has multiple conceptually possible paths to impact other than development of “Friendliness theory” and will address these in a future comment.)
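To make this contrast concrete, here is a minimal sketch, in Python, of the “tool”/“agent” distinction as I understand it from this discussion. Every name in it (plan_action, run_as_tool, run_as_agent, execute) is a hypothetical illustration, not any real or proposed system.

```python
# Hypothetical sketch of the "tool" vs. "agent" distinction.
# plan_action() stands in for whatever optimization an AGI would
# actually perform; neither mode describes any real system.

def plan_action(goal: str, world_state: dict) -> str:
    """Stub planner: pretend to compute the action that best serves `goal`."""
    return f"best-known action toward {goal!r} given {len(world_state)} known facts"

def run_as_tool(goal: str, world_state: dict) -> None:
    """Tool mode: compute a recommendation, then stop and report.
    A human reviews the output and decides whether and how to act on it."""
    action = plan_action(goal, world_state)
    print("Recommended:", action)  # the human stays in the loop

def run_as_agent(goal: str, world_state: dict, execute) -> None:
    """Agent mode: compute and execute actions in a closed loop, without
    pausing for human approval; this is the case for which 'Friendliness
    theory' is said to be necessary."""
    while True:
        action = plan_action(goal, world_state)
        world_state = execute(action)  # acts on the world directly

if __name__ == "__main__":
    # Tool mode can be exercised safely; agent mode would loop indefinitely.
    run_as_tool("route me to the airport", {"traffic": "heavy"})
```

The point of the sketch is that nothing in plan_action changes between the two modes; what changes is whether its output becomes a report to a human or an input to an actuator, which is why the efficiency question above is about the loop, not the underlying capability.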
Regarding Eliezer’s point #3: I think this largely comes down to how strong one finds the argument for “tool AI.” I agree that one shouldn’t expect SI to respond to every possible critique of its plans. But I think it’s reasonable to expect it to anticipate and respond to the stronger possible critiques.
If, as you say, “Tool” AI is different to “Oracle” AI, you are the first person to suggest it, AFAICT. Regardless of its strength, the argument appears to have been very difficult to invent; it seems unreasonable to expect someone to anticipate an argument when their detractors have also universally failed to do so (apart from you).
Some have argued that “agents” are likely to be more efficient and powerful than “tools,” since they are not bottlenecked by human input, and thus that the “tool” concept is unimportant.
Currently machines are enslaved by humans. It’s a common delusion that we’ll be able to keep them that way.
All plans start off with machines as tools. Only unrealistic plans have machines winding up as tools.