Why do you think people wouldn’t shut down an AI when they see it developing the capability to prevent its own shutdown, regardless of how useful it currently is?
Does section 5.2 of Disjunctive Scenarios answer your question? There are plenty of reasons why various groups would set an AI free, and something as simple as it having unlimited Internet access may allow it to copy itself to be run somewhere else, preventing any future shutdown attempts.
Also, “we should shut this thing down because it’s dangerous, regardless of how useful it currently is” is something that humans are empirically terrible at. Most people know that modern operating systems are probably full of undiscovered security holes that someone may be exploiting even as we speak, but nobody’s seriously proposing that we take down all computers while we rebuild operating systems from the ground up to be more secure.
More generally, how many times have you read a report of some accident that includes some phrasing to the effect of “everyone knew it was a disaster just waiting to happen”? Eliezer also recently had a long article about all kinds of situations where a lot of people know that the situation is fucked up, but can’t really do anything about it despite wanting to.
React to what? Such an AI might appear perfectly safe and increasingly useful right up to the point where it can no longer be turned off.