In many respects, I expect this to be closer to what actually happens than “everyone falls over dead in the same second” or “we definitively solve value alignment”. Multipolar worlds, AI that generally follows the law (when operators want it to, and modulo an increasing number of loopholes) but cannot fully be trusted, and generally muddling through are the default future. I’m hoping we don’t get instrumental survival drives though.