This is worth thinking about, but a reminder to participants that the space of possibilities is larger than we can imagine, and FAR larger than we can write compelling stories around.
I think my preferred story is NOT one that’s recognizable as Humanity vs. AI. Unless the FOOM is worst-case (many orders of magnitude literally overnight), it’s likely to look much more like AI-assisted humans vs. other humans’ preferred cultural and legal norms. Picture a commercial AI getting so good at optimizing its company’s control of resources that it just seems like a powerful company pursuing goals orthogonal to those of its customers/consumers/victims — and which then lobbies and advertises well enough that it gains MORE power rather than less when subgroups of humans try to fight it.
I think it’s almost guaranteed that in the early days (or decades) of an AI takeover, there will be lots of human Quislings who (wittingly or un-) cooperate with the AI in the belief that it benefits them, or even that it benefits humanity overall.
Think “FAANG team up to take over the world with more AI assistance than you expected” more than “AI takes over violently and overtly”. AI won’t care about status or being known to rule. It’ll only care about the goals it has (those it was created with plus those it’s learned/self-modified to want).
Also, this may already be happening. It’s hard to distinguish a smart, driven AI from a successful corporate behemoth. In fact, the argument can be made that corporations ARE agents in the AI sense, distinct from the cells/humans that compose them.
As someone who was until recently the main person doing machine learning research & development at a ‘data-driven company’, I can confirm that we worked as hard as we could to replace human decision-making with machine learning models at every level of the business we could reach. It worked better: it made more money, more reliably, with fewer hours of human work input. Over the years I was there, we gradually scaled down the workforce of human decision makers and scaled up the applications of ML, and each step along that path was clearly a profit win. Money spent on testing and maintaining new models, and on managing the massive data pipelines they depended on, quickly surpassed the money spent on wages. I suspect a lot of companies are pushing in similar directions.
SF author Charles Stross once made an extended analogy where corporations are alien invaders: http://www.antipope.org/charlie/blog-static/2010/12/invaders-from-mars.html