Eliezer: Well, the person who actually holds a coherent technical view, who disagrees with me, is named Paul Christiano.
What does Yudkowsky mean by ‘technical’ here? I respect the enormous contribution Yudkowsky has made to these discussions over the years, but I find his ideas about who counts as a legitimate dissenter from his opinions utterly ludicrous. Are we really supposed to think that Francois Chollet, who created Keras, is the main contributor to TensorFlow, and designed the ARC dataset (demonstrating actual, operationalizable knowledge about the kinds of simple tasks deep learning systems would not be able to master), lacks a coherent technical view? And on what should we base this? The word of Yudkowsky, who mostly makes verbal, often analogical, arguments and has essentially no significant technical contributions to the field?
To be clear, I think Yudkowsky does what he does well, and I see value in arguments of the kind he makes, but they do not strike me as particularly ‘technical’. The fact that Yudkowsky doesn’t even know enough about Chollet to pronounce his name displays a troubling lack of effort to engage seriously with opposing views. This isn’t just about coming across poorly to outsiders; it’s about dramatic miscalibration with respect to the value of other people’s opinions as well as the rigour of his own.
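(An aside for readers who haven’t seen ARC: each task is a handful of demonstration input/output grid pairs plus a test input, and the point is to infer the transformation from very few examples. The sketch below is a minimal illustration of the format; the toy task and solver are hypothetical, not taken from Chollet’s dataset.)

```python
import numpy as np

# An ARC-style task: a few demonstration input/output grid pairs plus a
# test input. Grids are small arrays of colour codes 0-9; real ARC tasks
# ship as JSON with "train" and "test" lists of such pairs. This toy task
# ("recolour every 1 to 2") is hypothetical.
task = {
    "train": [
        {"input": np.array([[0, 1], [1, 0]]), "output": np.array([[0, 2], [2, 0]])},
        {"input": np.array([[1, 1], [0, 0]]), "output": np.array([[2, 2], [0, 0]])},
    ],
    "test": {"input": np.array([[1, 0], [0, 1]])},
}

def solve(grid: np.ndarray) -> np.ndarray:
    # A human infers this rule from two examples; building a program that
    # infers it, rather than one hand-coded per task, is the hard part.
    out = grid.copy()
    out[out == 1] = 2
    return out

for pair in task["train"]:
    assert (solve(pair["input"]) == pair["output"]).all()
print(solve(task["test"]["input"]))  # [[2 0] [0 2]]
```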
He wrote a whole essay responding specifically to Chollet! https://intelligence.org/2017/12/06/chollet/
Yes, I’ve read it. Perhaps that does make it a little unfair of me to criticise a lack of engagement in this case. I should be more precise: kudos to Yudkowsky for engaging, but no kudos for concluding that someone who holds a very different view from the one he has arrived at must not have a ‘coherent technical view’.
I’d consider myself to have easily struck down Chollet’s wack ideas about the informal meaning of no-free-lunch theorems, which Scott Aaronson also singled out as wacky. As such, citing him as my technical opposition doesn’t seem good-faith; it’s putting up a straw opponent without much in the way of argument, and what argument there is I’ve already struck down. If you want to cite him as my leading technical opposition, I’m happy enough to point to our exchange and let any sensible reader decide who held the ball there; but I would consider it intellectually dishonest to promote him as my leading opposition.
I don’t want to cite anyone as your ‘leading technical opposition’. My point is that many people who might be described as having ‘coherent technical views’ would not consider your arguments for what to expect from AGI to be ‘technical’ at all. Perhaps you can just say what you think it means for a view to be ‘technical’?
As you say, readers can decide for themselves what to think about the merits of your position on intelligence versus Chollet’s (I recommend this essay by Chollet for a deeper articulation of some of his views: https://arxiv.org/pdf/1911.01547.pdf). Regardless of whether you think you ‘easily struck down’ his ‘wack ideas’, I think it is important for people to realise that his views come from a place of expertise about the technology in question.
You mention Scott Aaronson’s comments on Chollet. Aaronson says (https://scottaaronson.blog/?p=3553) of Chollet’s claim that an Intelligence Explosion is impossible: “the certainty that he exudes strikes me as wholly unwarranted.” I think Aaronson (and you) are right to point out that the strong claim Chollet makes is not established by the arguments in the essay. However, exactly the same criticism could be levelled at you: the degree of confidence in the conclusion is not in line with the nature of the evidence.
While I have serious issues with Eliezer’s epistemics on AI, I also agree that Chollet’s argument was terrible in that the No Free Lunch theorem is essentially irrelevant.
In a nutshell, this is also one of the problems I had with DragonGod’s writing on AI.
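For readers who want the theorem’s content spelled out: Wolpert’s no-free-lunch result says that, averaged uniformly over all possible target functions, every learner achieves identical off-training-set performance, which is exactly why it says nothing about the structured tasks we actually care about. A minimal sketch on a four-point domain (the two toy learners are hypothetical stand-ins):

```python
from itertools import product

# Tiny domain: 4 points, 2 fixed for training, 2 held out ("off-training-set").
X = list(product([0, 1], repeat=2))
train, test = X[:2], X[2:]

def majority_learner(train_labels):
    # Predicts the majority training label at every held-out point.
    return 1 if sum(train_labels) * 2 >= len(train_labels) else 0

def contrarian_learner(train_labels):
    # Deliberately predicts the opposite.
    return 1 - majority_learner(train_labels)

def mean_ots_accuracy(learner):
    # Average off-training-set accuracy over ALL 16 possible target functions.
    fs = list(product([0, 1], repeat=len(X)))
    total = 0.0
    for f in fs:
        labels = dict(zip(X, f))
        prediction = learner([labels[x] for x in train])
        total += sum(prediction == labels[x] for x in test) / len(test)
    return total / len(fs)

print(mean_ots_accuracy(majority_learner))    # 0.5
print(mean_ots_accuracy(contrarian_learner))  # 0.5 -- identical, as NFL predicts
```

Real-world tasks are not drawn uniformly from the space of all possible functions, so this averaged guarantee places no meaningful constraint on practical learners; that is the sense in which the theorem is irrelevant here.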
Why didn’t you mention Eric Drexler?
Maybe it’s my own bias as an engineer familiar with the safety solutions actually in use, but I think Drexler’s CAIS model is a viable alignment solution.
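For readers who haven’t encountered it: CAIS, from Drexler’s report Reframing Superintelligence, models advanced AI as an ecosystem of bounded, episodic, task-specific services, with oversight applied where services are composed, rather than as one open-ended agent. The sketch below is only a toy illustration of that decomposition; every name in it is hypothetical and nothing is drawn from the report itself.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BoundedService:
    # One narrow, episodic capability: a fixed input/output contract,
    # no persistent goals or memory across calls.
    name: str
    run: Callable[[str], str]
    compute_budget: int  # hard per-call resource cap

def coordinate(services: dict[str, BoundedService],
               plan: list[tuple[str, str]]) -> list[str]:
    # Oversight lives at the composition layer: each step is a separate,
    # auditable call to a narrow service, not an open-ended agent loop.
    return [services[name].run(task) for name, task in plan]

# Hypothetical usage:
services = {
    "summarise": BoundedService("summarise", lambda t: t[:40], 10),
    "translate": BoundedService("translate", lambda t: f"[fr] {t}", 10),
}
print(coordinate(services, [("summarise", "A long design document about deployment."),
                            ("translate", "Hello")]))
```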
I upvoted, because these are important concerns overall, but this sentence stuck out to me:

“The fact that Yudkowsky doesn’t even know enough about Chollet to pronounce his name displays a troubling lack of effort to engage seriously with opposing views.”
I’m not claiming that Yudkowsky does or does not display a troubling lack of effort to engage seriously with opposing views, but surely this can be judged more accurately by looking at his written output online than by his ability to correctly pronounce names from languages he is not a native speaker of. I, personally, skip over names while reading once I notice they are names, and I wouldn’t say that I have never engaged seriously with anyone’s arguments.
Fair point.
Maybe Francois Chollet has coherent technical views on alignment that he hasn’t published or shared anywhere (the blog post doesn’t count, for reasons that are probably obvious if you read it), but it doesn’t seem fair to expect Eliezer to know / mention them.