The example in the post below is not about an Artificial Intelligence literally at all! If the post were about what AIs supposedly can’t do, the central example would have used an AI!
Contra this assertion, Yudkowsky-2007 was very capable of using parables. The “genie” in this article is easily recognized as metaphorically referring to an imagined AI. For example, here is Yudkowsky-2007 in Lost Purposes, linking here:
I have seen many people go astray when they wish to the genie of an imagined AI, dreaming up wish after wish that seems good to them, sometimes with many patches and sometimes without even that pretense of caution.
Similarly, portions of Project Lawful are about AI, That Alien Message is about AI, and so forth.
I’m very sympathetic to the claim that this parable has been misinterpreted. This is a common problem with parables! They are good at provoking thought, they are good at strategic ambiguity, and they are bad at clear communication.
I’m not sympathetic to the claim that this post is not about AI literally at all.
It certainly bears upon AI, but it bears that way by making a point about the complexity of a task rather than talking about an intelligent mechanism which is purportedly aligned on that task. It does this by talking about an unintelligent mechanism, which is meant to be a way of talking about the task itself rather than any particular machine for doing it.
Yes, and. The post is about the algorithmic complexity of human values, about powerful optimizers (“genies”), and about the interaction of those two concepts. The post makes specific points about genies, including intelligent genies, that it would not make if it were not also about genies. E.g.:
There are three kinds of genies: Genies to whom you can safely say “I wish for you to do what I should wish for”; genies for which no wish is safe; and genies that aren’t very powerful or intelligent.
You wrote, “the Outcome Pump is a genie of the second class”. But the Time Travel Outcome Pump is fictional. The genie of the second class that Yudkowsky-2007 expects to see in reality is an AI. So the Outcome Pump, despite being unintelligent, is part of a parable about this aspect of powerful & intelligent AIs.
There’s lots of evidence I could give here: the tags (“Parables & Fables”), a comment from Yudkowsky-2007 on this post, and the way others have read it, both in the comments and in other posts like Optimality is the Tiger. Also, the Time Travel Outcome Pump is easy to use safely; it’s not the case that “no wish is safe”, and that attitude only makes sense parabolically. I don’t think that’s a valuable discussion topic, and I’m not sure you would even disagree.
However, when reading parables, it’s important to understand which properties transfer and which do not. Jesus is recorded as saying “The Kingdom of Heaven is like a pearl of great price”. If I read that and go searching for heaven inside oysters, then I have not understood the parable. Similarly, if someone reads this parable and concludes that an AI will not be intelligent, then they have not understood the parable or the meaning of AI.
I don’t really see people making that misinterpretation of this post; it’s a pretty farcical take. I notice you disagree here and elsewhere. Given that, I understand your desire for a top-of-post clarification. Adding this type of clarification is usually the job of an editor.