I disagree that “you really didn’t gain all that much” in your example. There are possible numbers such that it’s better to avoid producing AI, but (a) that may not be a lever available to us, and (b) AI done right would probably represent an existential eucatastrophe, greatly improving our ability to avoid or deal with future threats.
I have an intellectual issue with applying “probably” to an event that has never happened before in the history of the universe (so far as I can tell).
And if I am given the choice between slow, steady improvement in the lot of humanity (which seems to be the status quo) and a dice throw that results in either paradise or extinction, I’ll stick with slow and steady, thanks, unless the odds are overwhelmingly positive. I suspect they are overwhelming, but in the opposite direction: there are far more ways to screw up than to succeed, and once the AI is out, you no longer have much chance to change it. I’d prefer to wait it out, slowly refining things, until paradise is assured.
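To make that tradeoff concrete, here is a minimal expected-value sketch. It assumes (as I do above) that the slow path eventually reaches the same good outcome, and every number in it is an invented illustration, not an estimate of anything real:

```python
# Toy comparison: a guaranteed-but-slow path to a good outcome vs. an
# immediate gamble between paradise and extinction. All numbers are
# illustrative assumptions, not estimates.

PARADISE = 100.0    # value of the good outcome
EXTINCTION = 0.0    # value of the bad outcome
DELAY_COST = 5.0    # what the slow path loses by arriving late (assumed small)

def steady_ev() -> float:
    """Slow refinement: reach paradise (nearly) surely, minus the delay cost."""
    return PARADISE - DELAY_COST

def gamble_ev(p_success: float) -> float:
    """AI dice throw: paradise with probability p, extinction otherwise."""
    return p_success * PARADISE + (1 - p_success) * EXTINCTION

# Break-even: the gamble beats waiting only when
#   p * PARADISE > PARADISE - DELAY_COST
break_even = (PARADISE - DELAY_COST) / PARADISE
print(f"gamble must succeed with p > {break_even:.0%} to beat waiting")  # 95% here
```

Under that assumption the gamble can only buy time, not a better destination, which is why its break-even success probability sits near 1; that is all I mean by “overwhelmingly positive.”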
Hmm. That actually brings a thought to mind. If an unfriendly AI were far more likely than a friendly one (as I have just been suggesting), why aren’t we made of computronium? I can think of a few reasons, with no real way to decide among them. The scary one is “maybe we are, and this evolution thing is the unfriendly part...”