Yes, I’ve read it. Perhaps that does make it a little unfair of me to criticise lack of engagement in this case. I should be more precise: Kudos to Yudkowsky for engaging, but no kudos for coming to believe that someone having a very different view to the one he has arrived at must not have a ‘coherent technical view’.
I’d consider myself to have easily struck down Chollet’s wack ideas about the informal meaning of no-free-lunch theorems, which Scott Aaronson also singled out as wacky. As such, citing him as my technical opposition doesn’t seem good-faith; it’s putting up a straw opponent with little in the way of argument, and what argument there is I’ve already struck down. If you want to cite him as my leading technical opposition, I’m happy enough to point to our exchange and let any sensible reader decide who held the ball there; but I would consider it intellectually dishonest to promote him as my leading opposition.
I don’t want to cite anyone as your ‘leading technical opposition’. My point is that many people who might be described as having ‘coherent technical views’ would not consider your arguments for what to expect from AGI to be ‘technical’ at all. Perhaps you can just say what you think it means for a view to be ‘technical’?
As you say, readers can decide for themselves what to think about the merits of your position on intelligence versus Chollet’s (I recommend this essay by Chollet for a deeper articulation of some of his views: https://arxiv.org/pdf/1911.01547.pdf). Regardless of whether or not you think you ‘easily struck down’ his ‘wack ideas’, I think it is important for people to realise that they come from a place of expertise about the technology in question.
You mention Scott Aaronson’s comments on Chollet. Aaronson says (https://scottaaronson.blog/?p=3553) of Chollet’s claim that an Intelligence Explosion is impossible: “the certainty that he exudes strikes me as wholly unwarranted.” I think Aaronson (and you) are right to point out that the strong claim Chollet makes is not established by the arguments in the essay. However, the exact same criticism could be levelled at you: the degree of confidence in the conclusion is not in line with the nature of the evidence.
While I have serious issues with Eliezer’s epistemics on AI, I also agree that Chollet’s argument was terrible in that the No Free Lunch theorem is essentially irrelevant.
In a nutshell, this is also one of the problems I had with DragonGod’s writing on AI.
Why didn’t you mention Eric Drexler?
Maybe it’s my own bias as an engineer familiar with the safety solutions actually in use, but I think Drexler’s CAIS model is a viable alignment solution.