I am here to propose to you today that we should not balance the risks and opportunities of advanced artificial intelligence. We should welcome the risks and remain blind to the opportunities. We should needlessly confront entirely unnecessary dangers. To achieve these goals, we must plan stupidly and irrationally. We should act in fear and panic, and give in to technophobia; alternatively, we should act in blind enthusiasm. We should respect the interests of only some parties with a stake in the Singularity. We must try to ensure that the benefits of advanced technologies remain restricted to a small number of people, rather than accrue to as many individuals as possible. We must encourage, even if it’s impossible, violent conflicts using these technologies; and we must see that massive destructive capability falls into the hands of individuals. We should think through these issues later, when it is too late to do anything about them . . .
I like those reversal tests. They are not only useful but also quite hilarious.
I don’t know; it seems as though Wired magazine understands my hopes for the future pretty well. Where is the scary part?