This sounds really interesting! Generally, it seems most people believe AI will gain power either by being directly ordered to organize the entire world, or by some paper-clip factory robot going rogue and hacking other computers. I am starting to think it will more likely be: companies switch to AI middle managers to save $$$, and then things just happen from there.
Now one way this could go really mazy is like this: all AI models, even unique ones custom-made for a particular company, are based on some earlier model. Let’s say Walmart buys a model that is based on a general Middle Manager Model.
This model now has the power to hire and fire low-level workers, so it will be very much in the workers’ interest to find out what makes it tick. They can’t analyse the exact model (which is run from a well-guarded server farm). But at some point somebody online will get hold of a general Middle Manager Model and let people play with it. Perhaps the open-source crowd will run all sorts of funny experiments on it and find bugs that the Walmart model could have inherited.
Now the workers at Walmart all start playing with the online model in their spare time, combing AI forums for possible exploits. Nobody knows whether these also work on the real model, but people will share them anyway, hoping to hack the system: “Hey, if you report sick on days you already had off, the Manager will give you extra credits!” “Listen, if you scan these canned tomatoes fifty times, it triggers a bug in the system and you get a higher raise!” Etc.
The workers have no way to know which of the exploits actually work, but everybody will be too afraid of losing their job to be the only one NOT hacking the AI. Wait a few years and you will see the dark tech-cult turning up.