LINK: Bostrom & Yudkowsky, “The Ethics of Artificial Intelligence” (2011)
Just noticed that Less Wrong has apparently not yet linked to Bostrom & Yudkowsky’s new paper for the forthcoming Cambridge Handbook of Artificial Intelligence, entitled “The Ethics of Artificial Intelligence.” Enjoy.
I searched for “friendly”. No matches were found! The document is not “friendly”-friendly!
It says forthcoming, so I’ll give in to my urge to nitpick.
Page 6: Will the audience be familiar with the term “instrumental rationality”?
Possible typo on page 10: “the idea whole brain emulation,” should be “the idea of whole brain emulation.”
Controversial point that could be omitted on page 15: the claim that a truly desirable outcome means no conflict at all. The main point could be retained if it were changed to the less controversial claim that in a truly desirable outcome there would be no stories about saving the world against all odds.
Those are the only three things I noticed—overall, an excellent paper.
I think the “Minds with Exotic Properties” section only scratches the surface of exotic properties. It mostly deals with subjective rate of time and reproduction, two phenomena we already have quite good metaphors for. I think the point where human analogies really start to break down is when we start to talk about merging minds.
I will not elaborate here, just link to this short comment of mine, and to my agreeing reply to this comment by Johnicholas in a somewhat different context. There I used the term “Individualism Bias” for what, in my opinion, this section of the Bostrom-Yudkowsky paper exemplifies. Maybe they simply didn’t want to exceed the Shock Level of the Cambridge Handbook editors. But it is interesting to contrast sci-fi with philosophy: sci-fi does not have this blind spot; merging minds are almost a cliché there.
I would like to see thoughts along these lines elaborated and discussed somewhere.
The paper says:
I’d be interested to know what examples the authors had in mind.
The abstract:
(I skimmed through the paper. It’s nice, but I didn’t see anything that struck me as particularly novel.)
Yeah, like Chalmers’ paper, it’s a survey article.
A common reply to that might be: “the one(s) we are most likely to get”.
Thanks. My guess would be:
Ethics in Machine Learning and Other Domain‐Specific AI Algorithms: Yudkowsky
Artificial General Intelligence: Yudkowsky
Machines with Moral Status: Bostrom
Minds with Exotic Properties: Bostrom
Superintelligence: Yudkowsky
I am not so sure about the abstract and conclusion.