Is there any published work in AI (whether or not directed towards Friendliness) that you consider does not immediately, fundamentally fail due to the various issues and fallacies you’ve written on over the course of LW? (E.g. meaningfully named Lisp symbols, hiddenly complex wishes, magical categories, anthropomorphism, etc.)
ETA: By AI I meant AGI.
I assume this is to be interpreted as “published work in AGI”. Plenty of perfectly good AI work around.
Yes, I meant AGI by AI. I don’t consider any of the stuff outside AGI to be worth calling AI. The good stuff there consists merely of the more or less successful descendants of spinoffs of failed attempts to create AGI, and it is good in direct proportion to its distance from that original vision.
Well, it appears that no published work in AI has yet resulted in successful strong artificial intelligence.
It might still be making visible progress, or, failing that, at least not making basic fatal errors.