I find it interesting that most answers to this question seem to be based on, “How can I justify not letting the AI out of the box?” and not “What are the likely results of releasing the AI or failing to do so? Based on that, should I do it?”
I don’t know about that. My conclusion was that the AI in question was stupid or completely irrational. Those observations seem to have a fairly straightforward relationship to predictions of future consequences.