Thus most decisions will probably be allocated to AI systems
If AI systems make most decisions, humans will lose control of the future
If humans have no control of the future, the future will probably be bad for humans
Sure—at some point in the future, maybe.
Maybe, maybe not. Humans tend to have a bit of an ego when it comes to letting a filthy machine make decisions for them. But I’ll bite the bullet.
There are several levels on which I disagree here. Firstly, we’re assuming that “humans” have control of the future in the first place. It’s hard to assign coherent agency to humanity as a whole; it’s more a weird mess of conflicting incentives, and nobody really controls it. Secondly, if those AI systems are designed in the right way, they might just become the tools for humanity to sorta steer the future the way we want.
I agree with your framing here that systems made up of rules + humans + various technological infrastructure are the actual things that control the future. But I think the key is that the systems themselves would begin to favour more non-human decision making because of incentive structures.
E.g., corporate entities have a profit incentive to put the most efficient decision maker in charge of the company. Maybe that still includes a CEO, but the board might insist that the CEO use an AI assistant, and if the CEO makes a decision that goes against the AI and it turns out to be wrong, shareholders will come to trust the AI system more and more of the time. They don’t necessarily care about the ego of the CEO; they just care about outcomes within the competitive market.
In this way, more and more decision making gets turned over to non-human systems because of competitive structures that are very difficult to escape from. As this transition continues, it becomes very hard to control the unseen externalities of these decisions.
I suppose this doesn’t seem too catastrophic in its fundamental form, but playing it forward, the outcome seems to be a significant potential for harm from these externalities, without much of a mechanism for recourse.