Yeah, I was really only thinking about “not yet trust the AGI” as the main concern. Like, I’m somewhat hopeful that we can get the AGI to have a snap negative reaction to the thought of deceiving its operator, but it’s bound to have a lot of other motivations too, and some of those might conflict with that. And it seems like a harder task to make sure that the latter motivations will never ever outbid the former than to just give every snap negative reaction a veto, or something like that, if that’s possible.
I don’t think “if every option is bad, freeze in place paralyzed forever” is a good strategy for humans :-P and eventually it would be a bad strategy for AGIs too, as you say.