From the beginning, the primary interest in nuclear technology was the “inexhaustible supply of energy”.
The possibility of weapons was also obvious. I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence.
Both seem wonderful until one considers the possible risks. In neither case will anyone regulate the mathematics.
The regulation of nuclear weapons deals with objects and materials, whereas with AI it will be a bewildering variety of software that we cannot yet describe.
I’m not aware of any large movement calling for regulation, either inside or outside AI, because we don’t know how to write such regulation.
Could someone be kind enough to share the text of Stuart Russell’s interview with Science here?
Quoted here
There you go.
Superb, thanks! Did you create this, or is there a way I could have found this for myself? Cheers :)
Message sent.