The amount of Less Wrong jargon, links to Less Wrong posts explaining that jargon, and the Yudkowsky “proclamation” in this paragraph is all a bit squicky, alienating and potentially condescending.
Yes, well said. The deeper issue, though, is the underlying causes of such squicky, alienating paragraphs; surface recognition of potentially condescending paragraphs is probably insufficient.
Anyway, biting Pei’s bullet for a moment: if building an AI isn’t safe, if it is, as Pei thinks, similar to educating a child (except, presumably, with a few orders of magnitude more uncertainty about the outcome), then that sounds like a really bad thing to be trying to do.
It’s unclear that Pei would agree with your presumption that educating an AGI will entail “a few orders of magnitude more uncertainty about the outcome”. We can control every aspect of an AGI’s development and education to a degree unimaginable in raising human children. Examples: we can directly monitor their thoughts; we can branch successful designs; and, perhaps most importantly, we can raise them in a highly controlled virtual environment. All of this suggests we can vastly decrease the variance in outcomes compared to our current haphazard approach of creating human minds.
But we’re terrible at educating children.
Compared to what? Compared to an ideal education? Your point thus illustrates the room for improvement in educating AGI.
Children routinely grow up to be awful people.
Routinely? Nevertheless, this only shows the scope and potential for improvement. To simplify: if we can make AGI more intelligent, we can also make it less awful.
And this one lacks the predictable, well-defined drives and physical limits that let us predict how most humans will eventually act.
An unfounded assumption. To the extent that humans have these “predictable, well-defined drives and physical limits”, we can also endow AGIs with these qualities.
Pei’s argument is a grand rebuttal of the proposal that humanity spend more time on AI safety (why fund something that isn’t possible?) but no argument at all against the second part of the proposal—defund AI capabilities research.
Which doesn’t really require much of an argument against it. Who is going to defund AI capabilities research in a way that would actually prevent global progress?