Yes, that’s the main difficulty behind friendly AI in general. This does not constitute a specific way that it could go wrong.
Oh, sure. My only intention was to show that limiting the AI’s power to mere communication doesn’t imply safety. There may be thousands of specific ways it could go wrong. For instance:
The Oracle answers that human utility is maximised by wireheading everybody to become happiness automata, and that it is a moral duty to do this to others even against their will. Most people believe the Oracle (because its previous answers have always proved true and useful, and moreover it makes really neat PowerPoint presentations of its arguments), and wireheading becomes compulsory. After the minority of dissidents is defeated, all mankind turns into happiness automata and happily dies out a while later.