The Deceptive Turn Thesis seems almost unavoidable if you start from the assumptions “the AI doesn’t place an inhumanly high value on honesty” and “the AI is tested on inputs vaguely resembling the real world”. That latter assumption is probably unavoidable, unless it turns out that human values can be generalized enough to be comprehensible in inhuman settings. If we’re stuck testing an AI in a sandbox that resembles reality, then it can probably infer enough about reality to know when it would benefit by dissembling.