I think my best guess is kind of like this story, but:
People aren’t even really deploying best practices.
ML systems generalize kind of pathologically over long time horizons, and so e.g. long-term predictions don’t correctly reflect the probability of systemic collapse.
As a result, there’s no complicated “take over the sensors” moment; it’s just that everything is going totally off the rails and everyone is yelling about it, but things just keep gradually drifting further off the rails.
Maybe the biggest distinction is that e.g. “watchdogs” can actually give pretty good arguments about why things are bad. In the story we fix all the things they can explain and are left only with the crazy hard core of human-incomprehensible problems, but in reality we will probably just fix the things that are pretty obvious, and will be left with the hard core of problems that are still fairly obvious, but not quite obvious enough that institutions can respond intelligently to them.