A lot of people who are unfamiliar with AI dismiss the ideas inherent in the strong AGI argument. I think it’s always good to include the “G,” or to qualify your explanation with something like “the AGI formulation of AI, also known as ‘strong AI.’”
The risks of artificial intelligence are strongly tied with the AI’s intelligence.
AGI’s intelligence, rather. AI such as Numenta’s Grok can possess unbelievable neocortical intelligence, but without a reptile brain and a hippocampus and thalamus to shift between goals, it “just follows orders.” In fact, what does the phrase “just following orders” remind you of? I’m not sure that we want a limited-capacity AGI that follows human goal structures. What if those humans are sociopaths?
I think, as does Peter Voss, that AGI is likely to improve human morality, rather than to threaten it.
There are reasons to suspect a true AI could become extremely smart and powerful.
Agreed, and this represents MIRI’s position well. MIRI is a little light on “bottom-up” paths to AGI that are likely to be benevolent, such as AGIs “raised as human children.” I think Voss is even more right about these, given sufficient care, respect, and attention.
Most AI motivations and goals become dangerous when the AI becomes powerful.
I disagree here, for the same reasons Voss disagrees. I think “most” overstates the case for most responsible pathways forward. One pathway that does generate a lot of sociopathic options (lacking mirror neurons and human connectivity) is the “algorithmic design” or “provably friendly, top-down design” approach, which is possibly highly ironic.
Do most at MIRI agree with this point? I know Eliezer has written about reasons why this is likely the case, but there appears to be a large “biological school,” or “firm takeoff” school, within MIRI as well. And I’m not just talking about Voss’s adherents, either. Some of Moravec’s ideas are similar, as are some of Rodney Brooks’ ideas. (And Philip K. Dick’s “Second Variety” is a more realistic version of this kind of dystopia than The Terminator.)
It is very challenging to program an AI with safe motivations.
Agreed there. Well-worded. And this should get the journalists thinking at least at the level of Omohundro’s introductory speech.
Mere intelligence is not a guarantee of safe interpretation of its goals.
Also good.
A dangerous AI will be motivated to seem safe in any controlled training setting.
I prefer “might be” or “will likely be” or “has several reasons to be” to the words “will be.” I don’t think LW can predict the future, but I think they can speak very intelligently about predictable risks the future might hold.
Not enough effort is currently being put into designing safe AIs.
I think everyone here agrees with this statement, but there are a few more approaches that I believe are likely to be valid, beyond the “intentionally-built-in-safety” approach. Moreover, these approaches, as Yudkowsky fearfully notes, have less “overhead” than the “intentionally-built-in-safety” approach. However, I believe this is as likely to save us as it is to doom us. I think Voss agrees with this, but I don’t know for sure.
I know that evolution has tended to weed out sociopaths, even though they arose quite frequently. Without that inherent biological expiration date, a big screwup could be an existential risk. I’d like a sentence that sums this last point up, because I think it might get the journalists thinking at a higher level. This is Hans Moravec’s primary point when he urges us to become a “seafaring people” as the “tide of machine intelligence rises.”
If the AGI is “nanoteched,” it could be militarily superior to all humans, with little effort, within a few days of achieving superintelligence.
Your comment that MIRI is a little light on Child Machines and Social Machines is itself a little light… but that’s getting away from whether the article is a good summary and toward whether MIRI is right.