This article closely aligns with my thinking, but it misses a big point. I believe the crux is not whether the END state is desirable, but how societal upheaval should be managed in relation to AI development. Even if we believed that an ASI-run society could lead to paradise (some clearly do and some don’t), if we can never get there at all, the whole conversation is moot. Judging by how AI development is going, there’s a distinct chance that we never “get there” because we enrage regular people in the short term. In fact, bad planning could lead to a world that barely adopts ASI because people have revolted so hard against it.
To start, I agree with all your points about AI figures like Dario Amodei giving naively optimistic views of the future. They all assume we’re going to get to ASI, and that ASI will then inevitably become the way the world is run everywhere. But this is a convenient assumption, and it hides the messy reality of how we’d actually leverage ASI on day one. Amodei says it himself in a WSJ interview: widespread takeover is an aspiration rather than a verifiable fact.
Sadly, this is likely not going to happen. A jagged, staged adoption is more likely, leading to unrest among those who were put out of a job first. This is where the lack of planning will hurt us most.
My assumptions that underpin a jagged adoption:

- Continued lightning-fast pace of AI improvement and roll-out
- Easy disintermediation of digital work by AI
- Trailing capabilities in physical interfaces (essentially robust, scalable, and widely capable robotics), even by a few years
- A trust “tax” on robots doing mission-critical physical work vs. humans (robots need to perform much better than humans before regular people trust them in critical scenarios)
- Life-changing improvements for the average person depend on the adoption of physical interfaces, not digital ones (i.e. AI cooking my dinner is far more positively disruptive than AI telling me what to cook)

My logical thought process: belief in these assumptions ⇒ jagged AI adoption ⇒ societal upheaval ⇒ scapegoating of AI.
It stands to reason that anyone doing purely digital work will be put out of a job much faster than those whose work relies on a physical interface. Essentially, if you can do your job remotely, an AI will replace you faster than if you have to be there in person. Robotics might be close behind in theory, but if enough people lose their jobs fast enough, a few years of 15-20% unemployment will be enough to kick off a Jihad.
Millions of people do digital work today. The problem with millions of people losing their jobs so quickly is that the economy has no time to compensate. These people can’t learn new skills in time to find meaningful new careers, and being suddenly out of work, they’ll look for a boogeyman to blame. Unsurprisingly, ASI would become the scapegoat.
All the while, ASI could be making incredible breakthroughs across scientific fields, but if those breakthroughs aren’t quickly translated into physical improvements such as life-extending drugs and quality-of-life changes, they’re effectively meaningless to the average person.
As such, we could very well see AI development become self-defeating. The regular person isn’t going to hope for some transhuman singularity if they can’t put food on the table that year. It’s one thing if AI takes the jobs of 0.1% of the population, but once you’re hitting percentages like 5-10%, you can easily imagine the unemployed banding together to stage massive protests, sabotage AI data centers, and commit violence against those who propped up AI in the first place.
For the record, this is not a typical bourgeoisie-vs.-proletariat framing. Digital jobs are largely white-collar jobs, and the economic collapse will hit business owners too. Once large parts of the population lose their salaries (or expect to), they spend less. If they spend less, the economy goes into recession, hurting the value of every company and its ability to raise capital. Companies that can’t sell products and can’t raise capital go bust, meaning many business owners will eventually turn against ASI adoption as well.
In my mind, the only way a Butlerian Jihad COULD be avoided is by slowing the ASI roll-out until regular people see benefits massive enough that they’re OK with letting go of their jobs. However, the reckless, headlong advancement of AI leads me to believe this won’t happen. As a result, we’ll all have to learn our lesson the hard way.
Again, the main point in all of this is not that humans can never co-exist with ASI or that severe conflict is inevitable. Rather, it’s to specify the consequence of not planning. The likely consequence isn’t the creation of an actual dystopia; it’s the widespread revolt of a populace that believes AI will lead them to a dystopia, regardless of whether their beliefs turn out to be true.
On a last note, I believe illegal immigration is a good stand-in for what’s going to happen. Just as people have revolted against illegal immigrants taking their jobs, so too will they revolt against AI if it happens too quickly. The US recently elected a president on a platform of stopping illegal immigration. Don’t be surprised if another one gets elected on a platform of stopping the AI takeover.