Eliezer:
This is NOT premature! You just saved yourself at least one reader you were about to lose (me). I (and I suspect many others) have not been among the most regular readers of OB because it was frankly not clear to me whether you had anything new to say, or whether you were yet another clever but ultimately insubstantial “lumper” polymath-wannabe (to use Howard Gardner’s term) who ranged plausibly over too many subjects. Your ‘physics’ series, especially, almost made me unsubscribe from this blog (I’ll be honest: though you raised some interesting thoughts there, that series did NOT impress me).
But with this post, for what it is worth, you’ve FINALLY seriously engaged my attention. I’ll be reading your follow-up to this thread of thinking very carefully. I, too, started my research career fascinated by AI (I wrote an ELIZA clone in high school). Unlike yours, my more mature undergrad reaction to the field was not that AI was hard, but that it was simply an inappropriate computer-science framing of an extremely hard problem in the philosophy of mind. I think you would agree with this statement. Since philosophy didn’t seem like a money-making career, my own reaction was to steer toward a fascinating field that neighbors AI: control theory (which CS people familiar with history will probably think of as “stuff beyond subsymbolic AI, or modern analog computing”). Back when I started grad school in control theory (1997), I was of the opinion that it had more to say about the philosophy problem than AI did. My opinion grew more nuanced through my PhD and postdoc, and today I am a somewhat omnivorous decision-science guy with a core philosophical worldview based on control theory, but one who steals freely from AI, operations research, and statistics to fuel my thinking on both the practical pay-the-bills problems I work on and the philosophy problem underlying AI.
Oddly enough, I, too, have been drafting my first premature essay on AI, corresponding to yours, which I have tentatively titled “Moving Goalposts Are Good for AI.” I should post the thing in the next week or two.
I suspect you won’t agree with my conclusions though :)