FAI research is not AGI research, at least not at present, when we still don’t know exactly what our AGI will need to work towards, that is, how to formally define human preference.
So, my impression is that you and Eliezer have different views on this matter, and that Eliezer’s goal is for SIAI to actually build an AGI unilaterally. That’s where my low probability was coming from.
It seems much more feasible to develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that.
As I’ve said, I find your position sophisticated and respect it. I have to think more about your present point—reflecting on it may indeed alter my thinking about this matter.
So, my impression is that you and Eliezer have different views on this matter, and that Eliezer’s goal is for SIAI to actually build an AGI unilaterally.
Still, build AGI eventually, and not now. Expertise in AI/AGI is of low relevance at present.
It seems much more feasible to develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that.
It seems obviously infeasible to me that governments will chance upon this level of rationality. Also, we are clearly not on the same page if you say things like “implement in any AI”. Friendliness is not to be “installed in AIs”; Friendliness is the AI (modulo initial optimizations necessary to get the algorithm going and self-optimizing, however fast or slow that’s possible). The AGI part of FAI is exclusively about optimizing the definition of Friendliness (as an algorithm), not about building individual AIs with standardized goals.
See also this post for a longer explanation of why weak-minded AIs are not fit to carry the definition of Friendliness. In short, such AIs are (in principle) as much an existential danger as human AI researchers.
It seems obviously infeasible to me that governments will chance upon this level of rationality.
I wonder if we systematically underestimate the level of rationality of major governments. Historically, they haven’t done that badly. From an article about RAND:
Futurology was the magic word in the years after the Second World War, and because the Army and later the Air Force didn’t want to lose the civilian scientists to the private sector, Project Research and Development, RAND for short, was founded in 1945 together with the aircraft manufacturer Douglas, and in 1948 it was converted into a corporation. RAND established forecasts for the coming, cold future and, towards this end, developed the ‘Delphi’ method.
RAND worshipped rationality as a god and attempted to quantify the unpredictable, to calculate it mathematically, to bring the fear within its grasp and under control—something that seemed spooky to many Americans and led the Soviet Pravda to call RAND the “American academy of death and destruction.”
(Huh, this is the first time I’ve heard of the Delphi Method.) Many of the big names in game theory (von Neumann, Nash, Shapley, Schelling) worked for RAND at some point, and developed their ideas there.
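As an aside on the Delphi method mentioned above: it is a structured forecasting procedure in which a panel of experts gives anonymous estimates, sees a summary of the group’s answers, and revises over several rounds. Below is a minimal toy sketch in Python; the function name, the use of the median as the panel summary, and the simulated experts who simply drift toward that summary are illustrative assumptions, not a description of RAND’s actual procedure.

```python
# Toy sketch of Delphi-style forecasting (illustrative only):
# anonymous expert estimates are summarized and fed back over several
# rounds; here each simulated expert moves partway toward the group median.

from statistics import median

def delphi_rounds(initial_estimates, rounds=3, pull=0.5):
    """Aggregate expert estimates over several feedback rounds.

    initial_estimates: list of first-round numeric forecasts, one per expert.
    pull: how strongly each simulated expert moves toward the group median
          after seeing the anonymous panel summary (0 = not at all, 1 = fully).
    """
    estimates = list(initial_estimates)
    for _ in range(rounds):
        group_median = median(estimates)
        # Each expert revises after seeing the anonymous group summary.
        estimates = [e + pull * (group_median - e) for e in estimates]
    return median(estimates), estimates

# Example: five experts give first-round forecasts of some quantity.
consensus, final_estimates = delphi_rounds([2030, 2045, 2060, 2100, 2200])
print(consensus, final_estimates)
```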
RAND has done a lot of good work (I like their recent reports on Iran), but keep in mind that big misses can undo a lot of that credit; for example, even RAND acknowledges (in their retrospective published this year or last) that they screwed up massively with Vietnam.
I wonder if we systematically underestimate the level of rationality of major governments. Historically, they haven’t done that badly. From an article about RAND:
This is not really a relevant example in the context of Vladimir_Nesov’s comment. Certain government-funded groups (often, interestingly, within the military) have on occasion shown decent levels of rationality.
The suggestion he was replying to, namely to “develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that”, requires rational government policy-making and law-making rather than rare pockets of rationality within government-funded institutions. That kind of rationality is essentially non-existent in modern democracies.
It’s not adequate to “get governments to mandate that [Friendliness] be implemented in any AI”, because Friendliness is not a robot-building standard—see the rest of my comment. The statement about government rationality was more tangential, about governments doing anything at all concerning such a strange topic, and wasn’t meant to imply that this particular decision would be rational.
“Something like that” could be for a government-funded group to implement an FAI, which, judging from my example, seems within the realm of feasibility (conditional on FAI being feasible at all).
I wonder if we systematically underestimate the level of rationality of major governments.
Data point: the internet is almost completely a creation of government. Some say entrepreneurs and corporations played a large role, but except for corporations that specialize in doing contracts for the government, they did not begin to exert a significant effect until 1993, whereas government spending on the research that led to the internet began in 1960, and the direct predecessor of the internet (the ARPAnet) became operational in 1969.
Both RAND and the internet were created by the part of the government most involved in an enterprise (namely, the arms race during the Cold War) on which, in the eyes of most decision makers (including voters and juries), the long-term survival of the nation depended.
EDIT: significant backpedalling in response to downvotes in my second paragraph.
Still, build AGI eventually, and not now. Expertise in AI/AGI is of low relevance at present.
Yes, this is the point that I had not considered, and it is worthy of further consideration.
It seems obviously infeasible to me that governments will chance upon this level of rationality.
Possibly what I mention could be accomplished with lobbying.
Also, we are clearly not on the same page if you say things like “implement in any AI”. Friendliness is not to be “installed in AIs”; Friendliness is the AI (modulo initial optimizations necessary to get the algorithm going and self-optimizing, however fast or slow that’s possible). The AGI part of FAI is exclusively about optimizing the definition of Friendliness (as an algorithm), not about building individual AIs with standardized goals.
See also this post for a longer explanation of why weak-minded AIs are not fit to carry the definition of Friendliness. In short, such AIs are (in principle) as much an existential danger as human AI researchers.
Okay, so to clarify, I myself am not personally interested in Friendly AI research (which is why the points that you’re mentioning were not in my mind before), but I’m glad that there are some people (like you) who are.
The main point that I’m trying to make is that I think that SIAI should be transparent and accountable, and should place a high emphasis on credibility. I think that these things would result in SIAI having much more impact than it presently does.