CFAI doc in SIAI FAQ
I found a link to “Creating Friendly AI” http://singinst.org/upload/CFAI.html in the SIAI FAQ, which I think was recently updated. The document looks quite dated, and considering its length and title I wonder why it hasn’t been kept up to date. Is it even worth reading, given that it’s about 10 years old?
BTW, there are many dead links in it, and the ‘printable version’ link is dead as well.
It’s the doc that got me interested in Eliezer’s writing, and thus Less Wrong, just a year or two ago. (I was looking for Singularitarian stuff to distract me from my bleak macroeconomic ideas.)
I think the “Beyond Anthropomorphism” section is particularly insightful. [Gist as I understand it: Many of laypeople’s worries about uFAI, such as that it might resent its servitude to humanity and decide to wipe us out, are misguided, because e.g. resenting servitude is a property of evolved human cognition, not a property of minds in general.]
Eliezer says no, but Anissimov disagrees. Starglider has a detailed criticism.
As someone who has published lots of writing that no longer reflects my views, I can certainly understand Eliezer’s insistence that it is obsolete. And indeed it is. On the other hand, I know a few SI people who think there are important points in CFAI not made anywhere else, and prefer its presentation of a few points to those in CEV. I won’t name names, but people are welcome to identify themselves.
The information I was looking for, dead on! Now I wonder about http://singinst.org/ourresearch/publications/GISAI/printable-GISAI.html
I really appreciate CFAI and think everyone should read it, because it’s an example of a brilliant novice tackling the Friendly AI problem from scratch. It makes specific suggestions in abundance, something that authors of much of the machine ethics literature could only dream of. To truly understand the motivations for CEV, it is necessary to understand what came before it: the CFAI proposals.
Eliezer has condemned, deleted, and/or simply refused to release a great deal of his excellent work, for instance short intros on the SIAI website, Algernon’s Law, and much else. This is brand control. Even the Singularitarian Principles have a big “obsolete” warning at the top, ostensibly for just a few sentences expressing support for “the Singularity” rather than a “friendly Singularity”.
Creating Friendly AI is what really inspired me to get involved in the pursuit of Friendly AI. I don’t know if the Sequences or CEV would have had as powerful an effect.
I read it a while ago and I definitely don’t regret doing so.
I have no idea whether SIAI would still stand by it or disavow it at this point, but even if they think the central methodology is flawed, there are still a lot of other arguments in it that answer common questions, e.g. ‘Wouldn’t an AI do X?’ or ‘Why try to build any AI at all; isn’t it safer just to ban the whole field?’.
It does look a bit like a first attempt at something, and it seems to lack the degree of mathematical rigour that I get the impression SIAI wants.
Yeah, I started reading it and will probably finish because it’s interesting; I just wonder how different it is from their current direction and what the differences are.
It is simplest to just say that it is obsolete. It has largely been superseded by CEV. But SIAI’s thinking on Friendly AI is fluid and does not accord with any dogma or single published paper. You can say that it clusters around “CEV-like ideas”. (There is a general consensus that some sort of extrapolation is a good idea, rather than directly implementing literal human wishes.) For an example of more recent thinking, see Muehlhauser & Helm’s draft paper on the SIAI blog.