I agree. Friendly AI may be incoherent and impossible. In fact, it looks impossible right now. But that’s often how problems look right before we make a few key insights that make things clearer, and show us (e.g.) how we were asking a wrong question in the first place. The reason I advocate Friendly AI research (among other things) is because it may be the only way to secure a desirable future for humanity (see “Complex Value Systems are Required to Realize Valuable Futures”), even if it looks impossible. That is why Yudkowsky once proclaimed: “Shut Up and Do the Impossible!” When we don’t know how to make progress on a difficult problem, sometimes we need to hack away at the edges.
Just a suggestion for future dialogs: The amount of Less Wrong jargon, links to Less Wrong posts explaining that jargon, and the Yudkowsky “proclamation” in this paragraph is all a bit squicky, alienating and potentially condescending. And I think they muddle the point you’re making.
Anyway, biting Pei’s bullet for a moment, if building an AI isn’t safe, if it’s, like Pei thinks, similar to educating a child (except, presumably, with a few orders of magnitude more uncertainty about the outcome) that sounds like a really bad thing to be trying to do. He writes:
I don’t think a good education theory can be “proved” in advance, pure theoretically. Rather, we’ll learn most of it by interacting with baby AGIs, just like how many of us learn how to educate children.
There’s a very good chance he’s right. But we’re terrible at educating children. Children routinely grow up to be awful people. And this one lacks the predictable, well-defined drives and physical limits that let us predict how most humans will eventually act (pro-social, in fear of authority). It sounds deeply irresponsible, albeit not of immediate concern. Pei’s argument is a grand rebuttal of the proposal that humanity spend more time on AI safety (why fund something that isn’t possible?) but no argument at all against the second part of the proposal—defund AI capabilities research.
Just a suggestion for future dialogs: The amount of Less Wrong jargon, links to Less Wrong posts explaining that jargon, and the Yudkowsky “proclamation” in this paragraph is all a bit squicky, alienating and potentially condescending.
Seconded; that bit—especially the “Yudkowsky proclaimed”—stuck out for me.
I wish I could upvote this multiple times.
On average, they grow up to be average people. They generally don’t grow up to be Genghis Khan or a James Bond villain, which is what the UFAI scenario predicts. FAI only needs to produce AIs that are as good as the average person, however bad that average is.
How dangerous would an arbitrarily selected average person be to the rest of us if given significantly superhuman power?
The topic is intelligence. Some people have superhuman (well, more than 99.99% of humans) intelligence, and we are generally not afraid of them. We expect them to have ascended to the higher reaches of the Kohlberg hierarchy.
There doesn’t seem to be a problem of Unfriendly Natural Intelligence. We don’t kill off smart people on the basis that they might be a threat. We don’t refuse people education on the grounds that we don’t know what they will do with all that dangerous knowledge. (There may have been societies that worked that way, but they don’t seem to be around any more).
Agreed with all of this.
The amount of Less Wrong jargon, links to Less Wrong posts explaining that jargon, and the Yudkowsky “proclamation” in this paragraph is all a bit squicky, alienating and potentially condescending.
Yes. Well said. The deeper issue, though, is the underlying causes of such squicky, alienating paragraphs; surface recognition of potentially condescending paragraphs is probably insufficient.
Anyway, biting Pei’s bullet for a moment, if building an AI isn’t safe, if it’s, like Pei thinks, similar to educating a child (except, presumably, with a few orders of magnitude more uncertainty about the outcome) that sounds like a really bad thing to be trying to do.
It’s unclear that Pei would agree with your presumption that educating an AGI will entail “a few orders of magnitude more uncertainty about the outcome”. We can control every aspect of an AGI’s development and education to a degree unimaginable in raising human children. Examples: We can directly monitor their thoughts. We can branch successful designs. And perhaps most importantly, we can raise them in a highly controlled virtual environment. All of this suggests we can vastly decrease the variance in outcome compared to our current haphazard approach of creating human minds.
But we’re terrible at educating children.
Compared to what? Compared to an ideal education? Your point thus illustrates the room for improvement in educating AGI.
Children routinely grow up to be awful people.
Routinely? Nevertheless, this only shows the scope and potential for improvement. To simplify: if we can make AGI more intelligent, we can also make it less awful.
And this one lacks the predictable, well-defined drives and physical limits that let us predict how most humans will eventually act
An unfounded assumption. To the extent that humans have these “predictable, well-defined drives and physical limits”, we can also endow AGIs with these qualities.
Pei’s argument is a grand rebuttal of the proposal that humanity spend more time on AI safety (why fund something that isn’t possible?) but no argument at all against the second part of the proposal—defund AI capabilities research.
Which doesn’t really require much of an argument against. Who is going to defund AI capabilities research such that this would actually prevent global progress?
As someone who’s been on LW since before it was LW, that paragraph struck me as wonderfully clear. But posts should probably be written for newbies, with as little jargon as possible.
OTOH, writing for newbies should mean linking to explanations of jargon when the jargon is unavoidable.
The amount of Less Wrong jargon, links to Less Wrong posts explaining that jargon, and the Yudkowsky “proclamation” in this paragraph is all a bit squicky
I agree that those things are bad, but don’t actually see any “Less Wrong jargon” in that paragraph, with the possible exception of “Friendly AI”. “Wrong question” and “hack away at the edges” are not LW-specific notions.
Those phrases would be fine with me if they weren’t hyperlinked to Less Wrong posts. They’re not LW-specific notions, so there shouldn’t be a reason to link an Artificial Intelligence professor to blog posts discussing them. Anyway, I’m just expressing my reaction to the paragraph. You can take it or leave it.
Right: the problem is the gratuitous hyperlinking, about which I feel much the same way as I think you do—it’s not a matter of jargon.
(I’m not sure what the purpose of your last two sentences is. Did you have the impression I was trying to dismiss everything you said, or put you down, or something? I wasn’t.)
Tell me about it. Hyperlinks are totally wrong for academic communication. You’re supposed to put (Jones 2004) every sentence or two instead!
Apologies if you’re merely joking, but: Obviously Jack’s (and my) problem with the hyperlinks here is not that academic-paper-style citations would be better but that attaching those references to terms like “wrong question” and “hack away at the edges” (by whatever means) gives a bad impression.
The point is that the ideas conveyed by “wrong question” and “hack away at the edges” in that paragraph are not particularly abstruse or original or surprising; that someone as smart as Pei Wang can reasonably be expected to be familiar with them already; and that the particular versions of those ideas found at the far ends of those hyperlinks are likewise not terribly special. Accordingly, linking to them suggests (1) that Luke thinks Pei Wang is (relative to what one might expect of a competent academic in his field) rather dim, and, less strongly, (2) that Luke thinks that the right way to treat this dimness is for him to drink of the LW kool-aid.
But this dialog wasn’t just written for Pei Wang, it was written for public consumption. Some of the audience will not know these things.
And even smart academics don’t know every piece of jargon in existence. We tend to overestimate how much of what we know is stuff other people (or other people we see as our equals) know. This is related to Eliezer’s post “Explainers Shoot High. Aim Low!”
Not merely. I wouldn’t have included those particular links when writing an email—I would when writing a blog post. But I do make the point that the problem here is one of fashion, not one intrinsic to what is being communicated. Most references included in most papers aren’t especially useful or novel—the reasons for including them aren’t about information at all.
The point is that the ideas conveyed by “wrong question” and “hack away at the edges” in that paragraph are not particularly abstruse or original or surprising;
You are wrong when it comes to “Wrong Question”. The phrase in common usage is a lot more general than it is when used here. It doesn’t matter how unimpressive you consider the linked post; it remains the case that when used as Less Wrong jargon the meaning conveyed by the phrase is a lot more specific than the simple combination of the two words.
In the specific case of “Wrong Question”, find fault with the jargon usage, not with the use of a link to explain the jargon. Saying only “wrong question” in the sentence would convey a different message and so would be a failure of communication.
But I do make the point that the problem here is one of fashion
No, you make the point that a different problem from the one Jack and I were commenting on is one of fashion. (The silliness of this when taken as a serious response is why I thought you might merely be making a joke and not also trying to make a serious point.)
You are wrong when it comes to “Wrong Question”.
I’m willing to be convinced, but the mere fact that you say this doesn’t convince me. (I think there are two separate common uses, actually. If you say someone is asking the wrong question, you mean that there’s a right question they should be asking and the one they’ve asked is a distraction from it. If you say they’re asking a wrong question, you mean the question itself is wrongheaded—typically because of a false assumption—and no answer to it is going to be informative rather than confusing.)
What do you think Pei Wang would have taken “a wrong question” to mean without the hyperlink, and how does it differ from what you think it actually means, and would the difference really have impaired the discussion?
I’m going to guess at your answer (in the hope of streamlining the discussion): the difference is that Eliezer’s article about wrong questions talks specifically about questions that can be “dissolved by understanding the cognitive algorithm that generates the perception of a question”, as opposed to ones where all there is to understand is that there’s an untrue presupposition. Except that in the very first example Eliezer gives of a “wrong question”—the purely definitional if-a-tree-falls sort of question—what you need to understand that the question is wrongheaded isn’t a cognitive algorithm, it’s just the fact that sometimes language is ambiguous and what looks like a question of fact is merely a question of definition. Which philosophers (and others) have been pointing out for decades—possibly centuries.
But let’s stipulate for the sake of argument that I’ve misunderstood, and Eliezer really did intend “wrong question” to apply only to questions for which the right response is to understand the cognitive algorithms that make it feel as if there’s a question, and that Luke had that specific meaning in mind. Then would removing the hyperlink have made an appreciable difference to how Pei Wang would have understood Luke’s words? Nope, because he was only giving an example, and under the more general meaning of the phrase it is an example—indeed, substantially the same example—of the same thing.
Anyway, enough! At least for me. (Feel free to have the last word.)
[EDITED to add: If whoever downvoted this would care to say why, I’ll be grateful. Did I say something stupid? Was I needlessly rude? Do you just want this whole discussion to stop?]
(I’m not sure what the purpose of your last two sentences is. Did you have the impression I was trying to dismiss everything you said, or put you down, or something? I wasn’t.)
They were added in an edit after a few downvotes. They weren’t directed at you; I should have just left them out.
I agree that those things are bad, but don’t actually see any “Less Wrong jargon” in that paragraph, with the possible exception of “Friendly AI”. “Wrong question” and “hack away at the edges” are not LW-specific notions.
The “Wrong Question” phrase is in general use, but in Less Wrong usage it has a somewhat more specific and stronger meaning than it does elsewhere.
Pei’s argument is a grand rebuttal of the proposal that humanity spend more time on AI safety (why fund something that isn’t possible?) but no argument at all against the second part of the proposal—defund AI capabilities research.
You propose unilateral defunding? Surely that isn’t likely to help. If you really think it is going to help—then how?
I’m not strongly proposing it, just pointing out the implications of the argument.