Hi weverka, sorry for the downvotes (not mine, for the record). The answer is that Yudkowsky’s proposal aims to solve a different ‘shutdown problem’ than the one I’m discussing in this post. Yudkowsky’s proposal is aimed at stopping humans from developing potentially-dangerous AI. The problem I’m discussing in this post is that of designing artificial agents that both (1) pursue goals competently, and (2) never try to prevent us from shutting them down.
thank you.