Ideally that would be the case. However, if I had to guess, this roiling mass of Luddites would likely have chosen to boycott anything to do with AI as a result of their job and career losses. We’d like to believe that we’d easily be convinced out of violence. However, when humans get stuck in a certain way of thinking, we become stubborn and accept our own facts regardless of whatever an expert, or expert system, says to us. This future ChatGPT could use this to its advantage, but I don’t see how it prevents violence once people’s minds are set on it. Telling them “Don’t worry, be happy, this will all pass as long as you trust the government, the leaders, and the rising AGI” seems profoundly unlikely to work, especially in America, where telling anyone to trust the government just makes them distrust the messenger even more. And saying “market forces will allow new jobs to be created” seems unlikely to convince anyone who has already been thrown out of work by AI.
And increasing crackdowns on any one particular group would only be tolerated if there were a controlled burn of unemployment through society. When it’s just about everyone you have to crack down on, you have a revolution on your hands. All it takes is one group suffering brutality for it to cascade.
The way to stop this is total information control and deception, which, again, we’ve decided is thoroughly undesirable and dystopian behavior. Justifying it with “for the greater good” and “the ends justify the means” becomes the same sort of crypto-Leninist talk that the technoprogressives tend to so furiously hate.
This thought experiment requires the belief that automation will happen rapidly, without any care, foresight, or planning, and that there are no serious proposals to allow for a soft landing. The cold fact is that this is not an unrealistic expectation. I’d put the probability as high as 90% that I’m actually underestimating the scale of the reaction, failing to account for racial radicalization, religious radicalization, third-worldism, progressivism flirting with Ludditism, conservatism hardening into widespread paleoconservative primitivism, and so on.
If there is a more controlled burn, if we don’t simply throw everyone out of their jobs with only a basic welfare scheme to cover for them, then that number drops dramatically, because we are easily amused and distracted by tech toys and entertainment. It is entirely possible for a single variable to drastically alter outcomes, and right now, we seem to be speedrunning the worst outcome with all the worst possible variables working against us.