When humans choose their actions, they often think about the impact those actions will have on their status. They often don’t play to win; they play to impress the observers. (Yes, winning is impressive, but if winning using certain moves is more impressive than winning using others, many will choose the flashier moves even when the plainer ones are more likely to win.) An AI would not care about status if it expects that humans will soon be dead. It would not overcomplicate things unnecessarily, because it is not trying to signal its superhuman intelligence to a hypothetical observer.
For example, people keep making fun of the “Nigerian prince” scams, but the scams persist because, apparently, they work. Who knows, maybe the same approach could be used to destroy humanity. Like, send everyone an SMS at the same time, asking them to follow your commands and promising them millions of dollars if they obey. Ask for something simple and harmless first to train compliance, then ask them to do something such that if 1 person in 1000 does it, civilization will collapse. Maybe 1 in 1000 actually will.
(Among other reasons, this plan sounds stupid because the phone operators could trivially stop it by blocking SMS functionality for a while. Yeah, but maybe if you approach everyone at the same time and the whole action takes less than an hour, they won’t react quickly enough. In hindsight it will be obvious that disabling SMS quickly was the right move, but in the moment it will look like a weird prank, and disabling SMS will look like a very serious step with a possible impact on profits, one that requires approval from the important people; and if it happens on a weekend, people will hesitate to bother them.)
Also, you only need to destroy humanity once, so if you try a dozen stupid plans in parallel, even if each of them is more likely to fail than to succeed, the chance that at least one of them works can still be substantial.
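A minimal back-of-the-envelope sketch of that last point, assuming the plans fail independently; the 10% per-plan success probability and the count of 12 are illustrative assumptions, not numbers from the text:

```python
def at_least_one_success(p: float, n: int) -> float:
    """Chance that at least one of n independent plans succeeds,
    where each plan succeeds with probability p: 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# Illustrative assumption: a dozen plans, each with only a 10% chance of working.
print(at_least_one_success(p=0.1, n=12))  # ≈ 0.72
```

Under those assumed numbers, a dozen plans that each fail nine times out of ten still succeed at least once roughly 72% of the time, which is the whole point of running them in parallel.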