This sentence:

"If we fail to implement AI safely on our first attempt, we may not get a second chance (Yudkowsky 2008a)."
...seems wrong. We have failed to implement AI safely a million times by now. Obviously we don’t get only one shot at the problem.
The sentence seems to be trying to say that we only have to terminally screw up once. I don’t really see why it doesn’t just say that, except that what it does actually say sounds scarier, since it implies that any failure is a terminal screw-up. That would be scary if it were true. However, it isn’t true.
Good catch, thanks.