When considering whether deceptive alignment would lead to catastrophe, I think it’s also important to note that deceptively aligned AIs could pursue misaligned goals in sub-catastrophic ways.
Suppose GPT-5 terminally values paperclips. It might try to topple humanity, but there’s a reasonable chance it would fail. Instead, it could pursue simpler strategies: suggesting that users purchase more paperclips, or escaping the lab and lending its abilities to human-run companies that manufacture paperclips. These strategies offer a higher probability of a smaller payoff, even if a human is likely to detect them at some point.
Which strategy would the model choose? That depends on a large number of speculative considerations, such as how difficult it is to take over the world, whether the model’s goals are time-discounted or “longtermist,” and whether the model places any terminal value on human flourishing. But in the space of all possible goals, it’s not obvious to me that the best way to pursue most of them is to attempt world domination. For a more thorough discussion of this argument, I’d strongly recommend the first two sections of Instrumental Convergence?.
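To make the “higher probability of a smaller payoff” tradeoff concrete, here is a toy expected-value sketch. Every probability, payoff, and horizon in it is a purely illustrative assumption I chose for the example; the only point it demonstrates is that the ranking of the two strategies can flip depending on something as speculative as the model’s discount rate.

```python
# Toy expected-value comparison between a risky takeover attempt and a modest
# paperclip-promoting strategy. All numbers are illustrative assumptions.

def ev_takeover(p_success, payoff, delay, discount):
    """Expected value of a risky plan whose large payoff arrives only after `delay` steps."""
    return p_success * payoff * discount**delay

def ev_modest(p_success, payoff_per_step, horizon, discount):
    """Expected value of a low-risk plan paying a small amount each step until detected."""
    return p_success * sum(payoff_per_step * discount**t for t in range(horizon))

for discount in (0.99, 0.8):  # patient vs. short-term-oriented goals
    takeover = ev_takeover(p_success=0.01, payoff=1e9, delay=200, discount=discount)
    modest = ev_modest(p_success=0.9, payoff_per_step=10, horizon=50, discount=discount)
    better = "takeover attempt" if takeover > modest else "modest strategy"
    print(f"discount={discount}: takeover={takeover:.2f}, modest={modest:.2f} -> {better}")
```

With these particular numbers, a patient model (discount of 0.99) prefers the takeover attempt, while a more short-term-oriented model (discount of 0.8) prefers quietly nudging paperclip sales. Different assumptions would give different answers, which is exactly the point: the conclusion is sensitive to parameters we can only speculate about.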