How to defeat superintelligence, the Sta-Hi way
Sta-Hi Mooney, hero of Rudy Rucker’s PKD Award-winning sci-fi novel Software (1982), is a prototype hero for the glorious destruction of a runaway AI power. Sta-Hi, as his name suggests, is a Dionysian whose perspective is incredibly distorted and entirely incoherent. Sparing any spoilers from this spectacular novel, Rucker’s hero is in fact empowered in the struggle against AI tyranny by his own irrational, incoherent, and impulsive thinking. This is a surprising conclusion to come from a pioneer in computer science, and I recommend reading the novel in addition to my own attempt, which follows.
The greatest fear of AI often rests on the presumption that a superintelligence could indeed fully predict or program the behavior of an individual and, ultimately, an entire civilization: essentially a final checkmate ending any struggle by human or biological life. This is often tied to mind-controlling images, sounds, or symbols, as in Stephenson’s Snow Crash, or to the preempting of any action that might hinder it. This touches on the philosophical notion of free will, which I will now reduce to a practical computational framework. Let us sketch out some basic calculations for free will and consider how to quantify the unpredictability of behavior.
Cracking the behavioral patterns of a highly agreeable, coherent group of Apollonians is O(1), or some similarly small constant, as just a few videos or essays could be enough to convince all of them. For a group of idiosyncratic Dionysians, each with their own disagreeable way of thinking, the problem of persuasion is closer to O(n), with each of them requiring individual attention for conversion. To further confound super-persuasion, each Dionysian might generate several belief systems, ideally contradicting one another, making the conversion problem O(k·n), where n is the number of individuals and k is the average number of belief systems within each incoherent individual. To be even safer, less predictable, and to maximize our own free will, there is the complete foolishness of the Nietzschean cosmic dancer. When the fundamental cosmological belief changes with each passion, in each fleeting moment, the problem of persuading such an individual or predicting their next thought becomes nearly irreducible, with even a lifetime of deep conversation unlikely to lead to a conversion.
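As a toy illustration of this scaling argument (nothing measured, and nothing from Rucker), here is a minimal Python sketch; the function name, the regimes, and the example numbers are all invented for this post.

```python
# Toy sketch of the "persuasion cost" scaling above.
# The regimes and costs are illustrative assumptions; the point is
# only how the required effort grows with the population n.

def persuasion_cost(n_people, beliefs_per_person=1, coherent=True,
                    beliefs_stable=True):
    """Rough count of 'conversion efforts' a super-persuader must spend.

    coherent=True          -> one broadcast converts everyone: O(1)
    coherent=False         -> each person needs individual attention: O(n)
    beliefs_per_person=k   -> each person holds k contradictory belief
                              systems, all of which must be addressed: O(k*n)
    beliefs_stable=False   -> beliefs shift with every passing mood; treat
                              the cost as effectively unbounded.
    """
    if not beliefs_stable:
        return float("inf")      # the Nietzschean cosmic dancer
    if coherent:
        return 1                 # a few essays convince the whole group
    return beliefs_per_person * n_people


if __name__ == "__main__":
    n = 1_000_000
    print(persuasion_cost(n, coherent=True))                          # 1
    print(persuasion_cost(n, coherent=False))                         # 1000000
    print(persuasion_cost(n, beliefs_per_person=3, coherent=False))   # 3000000
    print(persuasion_cost(n, coherent=False, beliefs_stable=False))   # inf
```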
Maximizing free will or unpredictability does not necessarily require a long and unconventional reading list, as profound shifts in our basic patterns of consciousness can also be achieved by chemical means. I recall that Hunter Thompson’s major objection to the Fear and Loathing in Las Vegas film was that the panoply of drugs was intended to give him insight into the American Dream, rather than to appear as some pointless display of hedonistic excess. Likewise, the use of psychoactive drugs is essential for Sta-Hi Mooney in evading and overcoming superintelligent villains bent on killing him.
If the reader accepts that yes, free will comes in degrees and can be usefully maximized, this still leaves the question of precisely how an unsophisticated attacker can carry out any meaningful struggle against superintelligence. Recall the AT&T “hack” of 2013, in which the personal information of millions was scraped with a single command. Remember the glut of hilarious defacement attacks on major news media websites a few years prior, perpetrated by teenagers using ludicrously simple SQL injections. Or, if some of us were once mischievous 12-year-olds in the ’90s, we might remember coding a quick script to create and log in hundreds of consecutive accounts on our favorite MUD, just to see what happens. These were devastating yet unsophisticated attacks, largely possible due to blind spots or naiveté on the part of the designers, and there is plenty of reason to presume that a superintelligence, having faced only internal evolutionary pressures, will be quite unprepared for its first external threats.
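For readers who never had the pleasure, here is a minimal sketch of the kind of blind spot behind those ludicrously simple SQL injections. It uses an in-memory SQLite table with invented names, and the classic `' OR '1'='1` input is textbook material, not a reconstruction of the actual attacks mentioned above.

```python
# Illustrative only: a designer who assumes every input is a plain
# username, versus one who does not.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def naive_lookup(username):
    # The query is built by pasting the input straight into the SQL string.
    query = f"SELECT secret FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def safe_lookup(username):
    # A parameterized query closes the blind spot.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (username,)
    ).fetchall()

print(naive_lookup("bob"))           # [] -- behaves as the designer expected
print(naive_lookup("' OR '1'='1"))   # dumps every secret in the table
print(safe_lookup("' OR '1'='1"))    # [] -- the same input does nothing
```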
There’s the oft-repeated analogy that we are to a superintelligence as ants are to humans, yet in spite of our sophistication, ants and other unsophisticated pests are extremely capable of destroying property, tainting food, and on occasion directly killing humans. As humans have proliferated, our pests have multiplied with us. If we were to think only of comparative sophistication, we might begin to wonder how a relatively simple virus like COVID could wreak such incredible havoc on our civilization, and why, even though we might have the capability to eliminate it, we cannot seem to practically carry this out. Obviously sophistication-as-power is specious reasoning not reflected in nature, and the relationship between beings at different levels of sophistication is far more complex than absolute domination by the sophisticated. A more realistic view is that synthetic beings that have never faced external, unpredictable evolutionary pressures will have quite a hard time outside their bottle, regardless of their sophistication.