Amazon using an (unknown, secret) algorithm to hire or fire Flex drivers is not an instance of “AI”, not even in the buzzword sense of AI = ML. For all we know it’s doing something trivially simple, like combining a few measured properties (how often they’re on time, etc.) with a few manually assigned weights and thresholds. Even if it is using ML, it’s going to be something much more like a bog-standard random-forest model trained on 100k rows with no tuning than a scary, powerful language model with a runaway growth trend.
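To make concrete how unremarkable either possibility would be, here’s a minimal sketch of both versions. The feature names, weights, thresholds, and data are invented for illustration; nothing here is Amazon’s actual system (Python with scikit-learn assumed):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Version 1: no ML at all -- hand-picked weights and a cutoff.
# Feature names and numbers are hypothetical.
def keep_driver(on_time_rate, completion_rate, customer_rating):
    score = 0.5 * on_time_rate + 0.3 * completion_rate + 0.2 * (customer_rating / 5)
    return score >= 0.8  # below the threshold, the driver gets flagged

# Version 2: a bog-standard random forest with default settings,
# trained on ~100k rows of past decisions (synthetic stand-in data here).
rng = np.random.default_rng(0)
X = rng.random((100_000, 3))                        # e.g. on-time rate, completion rate, rating
y = (X @ np.array([0.5, 0.3, 0.2]) >= 0.5).astype(int)  # stand-in labels
model = RandomForestClassifier().fit(X, y)          # no tuning, default hyperparameters
print(model.predict(X[:5]))
```

Either version is the kind of thing one engineer could put together in an afternoon, which is the point: “secret algorithm” does not imply anything on the research path to powerful AI.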
Even if some laws get passed about this, they’d expand in one of two directions: “Bezos is literally an evil overlord” [an actual quote from the linked article], “our readers/voters love to hate him, so we should hurt him some more”; or “we already have laws establishing protected characteristics in hiring/firing/housing/etc.; if black-box ML models can’t prove they’re not violating those laws, then they’re not allowed”. The latter has a very narrow domain of applicability, so it would not affect AI risk.
What possible law or regulation, now or in the future, would differentially impede dangerous AI (the kind on the research path leading to AGI) relative to all other software, or even relative to all other ML? A law that impedes all ML equally would never get enough support to pass; a law that could pass would have to use some narrow discriminating wording that programmers could work around most of the time, and so it would accomplish very little.