This is very cool to see. I just finished re-reading Dune. I wonder what signal prompted me to do that, and I wonder if it was the same signal that prompted you to write this.
I’ve been thinking a lot recently about rationalist advocacy and community. I don’t think that individuals unilaterally deciding to stop automating things is going to make a dent in the problem. This is a straightforward coordination problem. If you drop out of modern society, for whatever reason, society fills in the hole you left. The only way to challenge Moloch is to create an alternative social framework that actually works better, at least in some regards.
One thing that keeps cropping up in my thoughts/discussions about rationalist community is that the value-add of the community needs to be very clear and concrete. The metaphor or analogue of professional licensure might be appropriate—a “rationalist credential”, some kind of impossible-to-fake, difficult-to-earn token of mastery that denotes high skill level and knowledge, that then becomes symbolically associated with the movement. I mention this idea because the value-add of being a credentialed rationalist would then have to be weighed against whatever weird social restrictions the community adopts—e.g., your suggestion of avoiding automation, or instituting some kind of fealty system. These ideas may be empirically, demonstrably good ideas (we don’t really know yet), but their cost in weirdness points can’t be ignored.
As an aside—and I’m open to being corrected on this—I don’t think Herbert was actually advocating for a lot of the ideas he portrays. Dune and Frank Herbert explore a lot of ideas but don’t really make prescriptions. In fact, I think that Herbert is putting forth his universe as an example of undesirable stagnation, not some kind of demonstrated perfection. It would be cool to be a mentat or a Bene Gesserit, i.e. a member of a tribe focused on realizing human potential, but I don’t think he was saying with his books that the multi-millennial ideologically motivated political stranglehold of the Bene Gesserit was a good thing. I don’t think that Herbert thinks that feudalism is a good thing just because it’s the system he presents. Maybe I’m wrong.
I am a fan of Dune (I recently wrote a whole essay on the genetics in Dune), but I’ve never drawn on it much for LW topics.
The basic problem with Dune is that Herbert based a lot of his extrapolations and discussion on things which were pseudoscience or have turned out to be false. And to some extent, people don’t realize this because they read their own beliefs into the novels—for example, OP commits this error in describing the Butlerian Jihad, which was not a war against autonomous machines but against people who used machines (likewise, Leto II’s ‘Arafel’ involved prescient machines… made by the Ixians), and which was not named after Samuel Butler in the first place. If Herbert had been thinking of a classic autonomous AI threat, that would be more interesting, but he wasn’t. Similarly, ‘ancestral memories’: Herbert seriously thought there was some sort of hidden memory repository which explained various social phenomena, and the whole Leto II/Fish Speaker/war is apparently sourced from a highly speculative outsider, probably crank, book (which is so obscure I have been unable to get a copy to see how far the borrowings go). We know now normal humans can’t be trained into anything like Mentats, after centuries of failure of education dating at least back to Rousseau & Locke’s blankslatism, and especially all the many attempts at developing prodigies, and case-studies like dual n-back. His overall paradigm of genetics was reasonable but, unfortunately, for the wrong species—apples rather than humans. Or the sociology in The Dosadi Experiment or how to design AI in Destination: Void or… the list goes on. Nice guy, nothing like L. Ron Hubbard (and a vastly better writer), and it makes for great novels, but like many SF authors or editors of the era* he often used his fiction as tracts/mouthpieces, and he was steeped in the witch’s brew that was California & the human potential movement, and that made his extrapolations very poor if we want to use them for any serious non-fiction purpose.
So, it just doesn’t come up. The Butlerian Jihad isn’t that relevant because it’s hardly described at all in the books, and what is described isn’t relevant, as we’re concerned about entirely different scenarios; human prescience doesn’t exist, period, so it doesn’t matter that it probably wouldn’t follow the genetics he outlines, so the whole paradigm of Bene Gesserit and Houses is irrelevant, as is everything that follows; Mentats can’t exist, at least not without such massive eugenics to boost human intelligence that it’d spark a Singularity first, so there’s not much point in discussing nootropics with an eye towards becoming a Mentat, because all things like stimulants or spaced repetition can do is give you relatively small benefits at the margin (or to put it another way, things Mentats do in fiction can be done in reality, but only using software on computers).
* eg Hubbard, Asimov, Cordwainer Smith even discounting the hallucination theory, especially John W. Campbell
I don’t think he was saying with his books that the multi-millennial ideologically motivated political stranglehold of the Bene Gesserit was a good thing. I don’t think that Herbert thinks that feudalism is a good thing just because it’s the system he presents.
I would say that he clearly presents the breeding program as a very good thing and vital for the long-term preservation & flourishing of humanity as the only way to create humans who are genuine ‘adults’ capable of long-term planning (in a rather gom jabbar sense).
As far as feudalism goes, there’s an amusing anecdote from Norman Spinrad I quote in my essay where he tweaks Herbert about all “this royalist stuff” and Herbert claims he was going to end it with democracy. (Given how little planning Herbert tended to do, I have to suspect that his response was rather embarrassed and he was thinking to himself, ‘I’ll do it later’...) He wouldn’t be the first author to find feudalism a lot more fun to write than their own liberal-democratic values. (George R. R. Martin is rather liberal, is a free speech advocate, was a conscientious objector, and describes Game of Thrones as anti-war, but you won’t find too much democracy in his books.)
I agree that Herbert thought the breeding program was necessary. But I also think he couched it as tragically necessary. Leto II’s horrific repression was similarly tragically necessary.
I think the questions provoked by Herbert’s concepts of Mentats and Bene Gesserit might actually be fruitful to think about.
If there were no meditation traditions on Earth, then we would have no reason to suspect that jhanas, or any other advanced states of meditative achievement, exist. If there were no musical instruments, we would have no reason to suspect that a human could use fingers or breath to manipulate strings or harmonics to create intricate, polyphonic, improvised melodies. If there were no arithmetic, we would view a person who could do rudimentary mental math as a wizard. One can extend this line of thinking to many things—reading and writing, deep strategy games like chess, high-level physical sports, and perhaps even specific fields of knowledge.
So it is probably safe to say that we “know” that a human can’t be trained to do the things that Mentats do in Dune, but I don’t think it’s safe to say that we have any idea what humans could be trained to do with unpredictable avenues of development and 20,000 years of cultural evolution.
I guess I’m not really disagreeing with anything you said, but rather advocating that we take Herbert’s ideas seriously but not literally.
This is pretty close to my thinking too. Herbert’s proposal was something like, “We have no idea what levels of human potential are out there.” He takes this idea and describes what it might look like, based on a few possible lines of development. Possibly he thought these were the most likely avenues of development, but that still seems unclear. Either way, he happened to pick examples that were wrong in the details, but the proposal stands.
You’re entirely right that taking Herbert’s views on most specific subjects at face value isn’t helpful. He was wrong about genetics, about education, and about a lot of things besides. (Though like moridinamael, I’m also not clear on whether he personally believed in things like genetic memory, though I would be interested to see sources if you have them. I assumed that it was an element he included for fictional/allegorical purposes.) But I think he was a clever guy who spent a lot of time thinking about problems we’re interested in, even if he often got it wrong.
I think it’s a little harsh to say that I commit the error of reading-in my beliefs about the Butlerian Jihad, given that I quote Reverend Mother Gaius Helen Mohiam as saying, “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them,” and Leto II as saying, “The target of the Jihad was a machine-attitude as much as the machines.” I’m aware that there are a lot of textual clues that the Jihad wasn’t a war against autonomous machines themselves. Though autonomous machines were certainly involved; the glossary to the original book defines the Butlerian Jihad as, “the crusade against computers, thinking machines, and conscious robots”, and the OC Bible’s commandment is, “Thou shalt not make a machine in the likeness of a human mind.”
More generally, I was using the Jihad as a metaphor to make a point about automation in general.
It’s clear that Strong AI is illegal under the prohibition of “thinking machines”, but it had always puzzled me why lesser devices — like calculators and recording devices — were included. I had passed it off as another mistake on Herbert’s part. But when I read Nabil’s comment it reminded me strongly of the Jihad, and I realized that if taken to an extreme conclusion it would lead to a proscription against almost all automation, like the one we find in Dune. Consider it a steelman of the position, if you would like.
Just because I quote Samuel Butler at the end doesn’t mean I think the Jihad was named after him! It’s just an amusing coincidence.
Looking forward to reading your essay on the Genetics of Dune!
Though like moridinamael, I’m also not clear on whether he personally believed in things like genetic memory, though I would be interested to see sources if you have them. I assumed that it was an element he included for fictional/allegorical purposes.
Yes, we shouldn’t assume a SF author endorsed any speculative proto/pseudo-science he includes. But in the case of genetic memory, we can be fairly sure that he ‘believed in it’ in the sense that he took it way more seriously than you or I and considered it a live hypothesis, because he says so explicitly in an interview I quote in the essay: he thinks genetic memory and pheromones, or something much like them, are necessary to explain things like the cohesion of mobs & social groups like aristocracies without explicit obvious status markers, or the supposed generational patterns of warfare ‘spasms’ (this is a reference to the obscure crankery of The Sexual Cycle of Human Warfare† which apparently deeply influenced Herbert, and you won’t understand all the references/influences unless you at least look at an overview of it, because it’s so lulzy).
Reading back, I see I got sidetracked and didn’t resolve your main point about why the Butlerian Jihad targeted all software. The one-line explanation is: permitting any software is an existential risk because it is a crutch which will cripple humanity’s long-term growth throughout the universe, leaving us vulnerable to the inevitable black swans (not necessarily AI).
First, you should read my essay, especially that Herbert interview and the Spinrad democracy footnote; and if you have the time, Herbert’s attitude towards computers & software is most revealed in Without Me You’re Nothing, which is a very strange artifact: his 1980 technical guide/book on programming PCs of that era. Leaving aside the wildly outdated information, which you can skip over, the interesting parts are his essays or commentaries on PCs in general, which convey his irascible humanist libertarian attitude towards PCs as a democratizing and empowering force for independent human growth. Herbert was quite a PC enthusiast: beyond writing a whole book about how to use them, he had apparently rigged up all sorts of gadgets and ‘home automation’ on his farmstead, made as a hobby to help him farm and, at least in theory, be more independent & capable & a Renaissance man. (Touponce is also well worth reading.) There’s a lot of supporting information in those which I won’t try to get into here, but which I think supports my generalizations below.
So, your basic error is thinking that the BJ is not about AI or existential-risk per se. The BJ is in fact about existential risk from Herbert’s POV; it’s just that it’s much more indirect than you are thinking. It has nothing to do with signaling or arms-races. Herbert’s basic position is that machines (like PCs), ‘without me [the living creative human user], they are nothing’: they are dead, uncreative, unable to improvise or grow, and constraining. (At least without a level of strong AI he considered centuries or millennia away & to require countless fundamental breakthroughs.) They lock humans into fixed patterns. And to Herbert, this fixedness is death. It is death, sooner or later, perhaps many millennia later, but death nevertheless; and [human] life is jazz:
In all of my universe I have seen no law of nature, unchanging and inexorable. This universe presents only changing relationships which are sometimes seen as laws by short-lived awareness. These fleshly sensoria which we call self are ephemera withering in the blaze of infinity, fleetingly aware of temporary conditions which confine our activities and change as our activities change. If you must label the absolute, use its proper name: “Temporary”.
Or
The person who takes the banal and ordinary and illuminates it in a new way can terrify. We do not want our ideas changed. We feel threatened by such demands. ‘I already know the important things!’ we say. Then Changer comes and throws our old ideas away.
And
Odrade pushed such thoughts aside. There were things to do on the crossing. None of them more important than gathering her energies. Honored Matres could be analyzed almost out of reality, but the actual confrontation would be played as it came—a jazz performance. She liked the idea of jazz although the music distracted her with its antique flavors and the dips into wildness. Jazz spoke about life, though. No two performances ever identical. Players reacted to what was received from the others: jazz. Feed us with jazz.
(‘Muad’dib’s first lesson was how to learn’/‘the wise man shapes himself, the fool lives only to die’ etc etc)
Whether it’s some space plague or space aliens or sterility or decadence or civil war or spice running out or thinking machines far in the future, it doesn’t matter, because the universe will keep changing, and humans mentally enslaved to, and dependent on, their thinking machines, would not. Their abilities will be stunted and wither away, they will fail to adapt and evolve and grow and gain capabilities like prescience. (Even if the thinking-machines survive whatever doomsday inevitably comes, who cares? They aren’t humans. Certainly Herbert doesn’t care about AIs, he’s all about humanity.) And sooner or later—gambler’s ruin—there will be something and humanity will go extinct. Unless they strengthen themselves and enter into the infinite open universe, abandoning delusions about certainty or immortality or reducing everything to simple rules.
That is why the BJ places the emphasis on banning anything that serves as a crutch for humans, mechanizing their higher life.* It’s fine to use a forklift or a spaceship: humans were never going to hoist a 2-ton pallet or flap their wings to fly the galaxy, and those tools extend their abilities; it’s not fine to ask a computer for an optimal Five-Year Plan for the economy or to pilot the spaceship, because now it’s replacing the human role. The strictures force the development of Mentats, Reverend Mothers, Navigators, Face Dancers, sword-masters, and so on and so forth, all of which eventually merge in the later books, evolving super-capable humans who can Scatter across the universe, evading ever new and more dangerous enemies, ensuring that humanity never goes extinct, never gets lazy, and someday will become, as the Bene Gesserit put it, ‘adults’, who presumably can discard all the feudal frippery and stand as mature independent equals in fully democratic societies.
As you can see, this has little to do with Confucianism or the stasis being intrinsically desirable or it being a good thing to remove all bureaucracy (bureaucracy is just a tool, like any other, to be used skillfully) or indeed all automation etc.
That is funny! I hadn’t thought about Dune in a while, but Nabil’s comment on SSC brought thoughts of the Jihad flooding back.
I agree with your critiques of unilateral action; it’s a major problem with all proposals like this (maybe a whole post on this at some point). Something that bugs me about a lot of calls to action, even relatively mundane political ones, is that they don’t make clear what I, personally, can do to further the cause.
This is why I specifically advised that people not automate anything new. Many of us are programmers or engineers; we feel positively about automation and will often want to implement it in our lives. Some of us even occupy positions of power in various organizations, or are in a position to advise people who are. I know that this idea will make me less likely to automate things in my life; I hope it will influence others similarly.
Dismantling the automation we have sounds like a much tougher coordination problem. I’m less optimistic about that one! But maybe we can not actively make it worse.
The fealty proposal was intended as a joke! I just think we could consider being more Confucian.
Exactly what Herbert believed is hard to say, but my impression has always been that he mostly agrees with the views of his “main” characters: Leto I, Paul, Hayt, Leto II, Siona, Miles Teg, etc. Regarding feudalism, he says that it is the “natural condition of human beings…not that it is the only condition or not that it is the right condition”. I’ve found this interview pretty enlightening.
In regards to the “multi-millennial ideologically motivated political stranglehold”, I’m not sure if he thinks it’s good. But insofar as we think human extinction is bad, we have to see this system as, if not good, then at least successful.
This is very cool to see. I just finished re-reading Dune. I wonder what signal prompted me to do that, and I wonder if it was the same signal that prompted you to write this.
I’ve been thinking a lot recently about rationalist advocacy and community. I don’t think that individuals unilaterally deciding to stop automating things is going to make a dent in the problem. This is a straightforward coordination problem. If you drop out of modern society, for whatever reason, society fills in the hole you left. The only way to challenge Moloch is to create an alternative social framework that actually works better, at least in some regards.
One thing that keeps cropping up in my thoughts/discussions about rationalist community is that the value-add of the community needs to be very clear and concrete. The metaphor or analogue of professional licensure might be appropriate—a “rationalist credential”, some kind of impossible-to-fake, difficult-to-earn token of mastery that denotes high skill level and knowledge, that then becomes symbolically associated with the movement. I mention this idea because the value-add of being a credentialed rationalist would then have to be weighed against whatever weird social restrictions that the community adopts—e.g., your suggestion of avoiding automation, or instituting some kind of fealty system. These ideas may be empirically, demonstrably good ideas (we don’t really know yet) but their cost in weirdness points can’t be ignored.
As an side—and I’m open to being corrected on this—I don’t think Herbert was actually advocating for a lot of the ideas he portrays. Dune and Frank Herbert explore a lot of ideas but don’t really make prescriptions. In fact, I think that Herbert is putting forth his universe as an example of undesirable stagnation, not some kind of demonstrated perfection. It would be cool to be a mentat or a Bene Gesserit, i.e. a member of a tribe focused on realizing human potential, but I don’t think he was saying with his books that the multi-millennial ideologically motivated political stranglehold of the Bene Gesserit was a good thing. I don’t think that Herbert thinks that feudalism is a good thing just because it’s the system he presents. Maybe I’m wrong.
I am a fan of Dune (I recently wrote a whole essay on the genetics in Dune), but I’ve never drawn on it much for LW topics.
The basic problem with Dune is that Herbert based a lot of his extrapolations and discussion on things which were pseudoscience or have turned out to be false. And to some extent, people don’t realize this because they read their own beliefs into the novels—for example, OP commits this error in describing the Butlerian Jihad, which was not a war against autonomous machines but against people who used machines (likewise, Leto II’s ‘Arafel’ involved prescient machines… made by the Ixians), and which was not named after Samuel Butler in the first place. If Herbert had been thinking of a classic autonomous AI threat, that would be more interesting, but he wasn’t. Similarly, ‘ancestral memories’: Herbert seriously thought there was some sort of hidden memory repository which explained various social phenomena, and the whole Leto II/Fish Speaker/war is apparently sourced from a highly speculative outsider, probably crank, book (which is so obscure I have been unable to get a copy to see how far the borrowings go). We know now normal humans can’t be trained into anything like Mentats, after centuries of failure of education dating at least back to Rousseau & Locke’s blankslatism, and especially all the many attempts at developing prodigies, and case-studies like dual n-back. His overall paradigm of genetics was reasonable but unfortunately, for the wrong species—apples rather than humans. Or the sociology in The Dosadi Experiment or how to design AI in Destination: Void or… the list goes on. Nice guy, nothing like L. Ron Hubbard (and a vastly better writer), and it makes for great novels, but like many SF authors or editors of the era* he often used his fiction as tracts/mouthpieces, and he was steeped in the witch’s brew that was California & the human potential movement and that made his extrapolations very poor if we want to use them for any serious non-fiction purpose.
So, it just doesn’t come up. The Butlerian Jihad isn’t that relevant because it’s hardly described at all in the books and what is described isn’t relevant as we’re concerned about entirely different scenarios; human prescience doesn’t exist, period, so it doesn’t matter that it probably wouldn’t follow the genetics he outlines so the whole paradigm of Bene Gesserit and Houses is irrelevant as is everything that follows; Mentats can’t exist, at least not without such massive eugenics to boost human intelligence that it’d spark a Singularity first, so there’s not much point in discussing nootropics with an eye towards becoming a Mentat because all things like stimulants or spaced repetition can do is give you relatively small benefits at the margin (or to put it another way, things Mentats do in fiction can be done in reality, but only using software on computers)
* eg Hubbard, Asimov, Cordwainer Smith even discounting the hallucination theory, especially John W. Campbell
I would say that he clearly presents the breeding program as a very good thing and vital for the long-term preservation & flourishing of humanity as the only way to create humans who are genuine ‘adults’ capable of long-term planning (in a rather gom gabbar sense).
As far as feudalism goes, there’s an amusing anecdote from Norman Spinrad I quote in my essay where he tweaks Herbert about all “this royalist stuff” and Herbert claims he was going to end it with democracy. (Given how little planning Herbert tended to do, I have to suspect that his response was rather embarrassed and he was thinking to himself, ‘I’ll do it later’...) He wouldn’t be the first author to find feudalism a lot more fun to write than their own liberal-democratic values. (George R. R. Martin is rather liberal, is a free speech advocate, was a conscientious objector, and describes Game of Thrones as anti-war, but you won’t find too much democracy in his books.)
I agree that Herbert thought the breeding program was necessary. But I also think he couched it as tragically necessary. Leto II’s horrific repression was similarly tragically necessary.
I think the questions provoked by Herbert’s concepts of Mentats and Bene Gesserit might actually be fruitful to think about.
If there were no meditation traditions on Earth, then we would have no reason to suspect that jhanas, or any other advanced states of meditative achievement, exist. If there were no musical instruments, we would have no reason to suspect that a human could use fingers or breath to manipulate strings or harmonics to create intricate, polyphonic, improvised melodies. If there were no arithmetic, we would view a person who could do rudimentary mental math to be a wizard. One can extend this line of thinking to many things—reading and writing, deep strategy games like chess, high-level physical sports, and perhaps even specific fields of knowledge.
So it is probably safe to say that we “know” that a human can’t be trained to do the things that Mentats do in Dune, but I don’t think it’s safe to say that we have any idea what humans could be trained to do with unpredictable avenues of development and 20,000 years of cultural evolution.
I guess I’m not really disagreeing with anything you said, but rather advocating that we take Herbert’s ideas seriously but not literally.
This is pretty close to my thinking too. Herbert’s proposal was something like, “We have no idea what levels of human potential are out there.” He takes this idea and describes what it might look like, based on a few possible lines of development. Possibly he thought these were the most likely avenues of development, but that still seems unclear. Either way, he happened to pick examples that were wrong in the details, but the proposal stands.
You’re entirely right that taking Herbert’s views on most specific subjects isn’t helpful. He was wrong about genetics, about education, and about a lot of things besides. (Though like moridinamael, I’m also not clear on whether he personally believed in things like genetic memory, though I would be interested to see sources if you have them. I assumed that it was an element he included for fictional/allegorical purposes.) But I think he was a clever guy who spent a lot of time thinking about problems we’re interested in, even if he often got it wrong.
I think it’s a little harsh to say that I commit the error of reading-in my beliefs about the Butlerian Jihad, given that I quote Reverend Mother Gaius Helen Mohiam as saying, “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them,” and Leto II as saying, “The target of the Jihad was a machine-attitude as much as the machines.” I’m aware that there are a lot of textual clues that the Jihad wasn’t a war against autonomous machines themselves. Though autonomous machines were certainly involved; the glossary to the original book defines the Butlerian Jihad as, “the crusade against computers, thinking machines, and conscious robots”, and the OC Bible’s commandment is, “Thou shalt not make a machine in the likeness of a human mind.”
More generally, I was using the Jihad as a metaphor to make a point about automation in general.
It’s clear that Strong AI is illegal under the prohibition of “thinking machines”, but it had always puzzled me why lesser devices — like calculators and recording devices — were included. I had passed it off as another mistake on Herbert’s part. But when I read Nabil’s comment it reminded me strongly of the Jihad, and I realized that if taken to an extreme conclusion it would lead to a proscription against almost all automation, like the one we find in Dune. Consider it a steelman of the position, if you would like.
Just because I quote Samuel Butler at the end, doesn’t mean I think the Jihad was named after him! It’s just an amusing coincidence.
Looking forward to reading your essay on the Genetics of Dune!
Yes, we shouldn’t assume a SF author endorsed any speculative proto/pseudo-science he includes. But in the case of genetic memory, we can be fairly sure that he ‘believed in it’ in the sense that he took it way more seriously than you or I and considered it a live hypothesis because he says so explicitly in an interview I quote in the essay: he thinks genetic memory and pheromones, or something much like them, is necessary to explain things like the cohesion of mob & social groups like aristocracies without explicit obvious status markers, or the supposed generational patterns of warfare ‘spasms’ (this is a reference to the obscure crankery of The Sexual Cycle of Human Warfare† which apparently deeply influenced Herbert and you won’t understand all the references/influences unless you at least look at an overview of it because it’s so lulzy).
Reading back, I see I got sidetracked and didn’t resolve your main point about why the Butlerian Jihad targeted all software. The one-line explanation is: permitting any software is an existential risk because it is a crutch which will cripple humanity’s long-term growth throughout the universe, leaving us vulnerable to the inevitable black swans (not necessarily AI).
First, you should read my essay, and especially that Herbert interview and the Spinrad democracy footnote and if you have the time, Herbert’s attitude towards computers & software is most revealed in Without Me You’re Nothing, which is a very strange artifact: his 1980 technical guide/book on programming PCs of that era—leaving aside the wildly outdated information which you can skip over, the interesting parts are his essays or commentaries on PCs in general, which convey his irascible humanist libertarian attitude on PCs as being a democratizing and empowering force for independent-human growth. Herbert was quite a PC enthusiast: beyond writing a whole book about how to use them, his farmstead apparently had rigged up all sorts of gadgets and ‘home automation’ he had made as a hobby to help him farm and, at least in theory, be more independent & capable & a Renaissance man. (Touponce is also well worth reading.) There’s a lot of supporting information in those I won’t try to get into here which I think support my generalizations below.
So, your basic error is that you are wrong about the BJ not being about AI or existential-risk per se. The BJ here is in fact about existential-risk from Herbert’s POV; it’s just that it’s much more indirect than you are thinking. It has nothing to do with signaling or arms-races. Herbert’s basic position is that machines (like PCs), ‘without me [the living creative human user], they are nothing’: they are dead, uncreative, unable to improvise or grow, and constraining. (At least without a level of strong AI he considered centuries or millennia away & to require countless fundamental breakthroughs.) They lock humans into fixed patterns. And to Herbert, this fixedness is death. It is death, sooner or later, perhaps many millennium later, but death nevertheless; and [human] life is jazz:
(‘Muad’dib’s first lesson was how to learn’/‘the wise man shapes himself, the fool lives only to die’ etc etc)
Whether it’s some space plague or space aliens or sterility or decadence or civil war or the spice running out or thinking machines far in the future, it doesn’t matter, because the universe will keep changing, and humans mentally enslaved to, and dependent on, their thinking machines will not. Their abilities will be stunted and wither away; they will fail to adapt and evolve and grow and gain capabilities like prescience. (Even if the thinking machines survive whatever doomsday inevitably comes, who cares? They aren’t human. Certainly Herbert doesn’t care about AIs; he’s all about humanity.) And sooner or later (gambler’s ruin) there will be something, and humanity will go extinct, unless it strengthens itself and enters into the infinite open universe, abandoning delusions about certainty or immortality or reducing everything to simple rules.
That is why the BJ places the emphasis on banning anything that serves as a crutch for humans, mechanizing their higher life.* It’s fine to use a forklift or a spaceship; humans were never going to hoist a 2-ton pallet or flap their wings to fly the galaxy, and those tools extend their abilities. It’s not fine to ask a computer for an optimal Five-Year Plan for the economy or to pilot the spaceship, because now it’s replacing the human role. The strictures force the development of mentats, Reverend Mothers, Navigators, Face Dancers, sword-masters, and so on and so forth, all of which eventually merge in the later books into super-capable humans who can Scatter across the universe, evading ever newer and more dangerous enemies, ensuring that humanity never goes extinct, never gets lazy, and someday will become, as the Bene Gesserit put it, ‘adults’, who presumably can discard all the feudal frippery and stand as mature independent equals in fully democratic societies.
As you can see, this has little to do with Confucianism, or with stasis being intrinsically desirable, or with it being a good thing to remove all bureaucracy (bureaucracy is just a tool, like any other, to be used skillfully) or indeed all automation.
* I suspect that there’s a similar idea behind ‘BuSab’ in his ConSentiency universe, but TBH, I find those novels/stories too boring to read carefully.
That is funny! I hadn’t thought about Dune in a while, but Nabil’s comment on SSC brought thoughts of the Jihad flooding back.
I agree with your critiques of unilateral action; it’s a major problem with all proposals like this (maybe worth a whole post at some point). Something that bugs me about a lot of calls to action, even relatively mundane political ones, is that they don’t make clear what I, personally, can do to further the cause.
This is why I specifically advised that people not automate anything new. Many of us are programmers or engineers; we feel positively about automation and will often want to implement it in our lives. Some of us even occupy positions of power in various organizations, or are in a position to advise people who are. I know that this idea will make me less likely to automate things in my life; I hope it will influence others similarly.
Dismantling the automation we have sounds like a much tougher coordination problem. I’m less optimistic about that one! But maybe we can not actively make it worse.
The fealty proposal was intended as a joke! I just think we could consider being more Confucian.
Exactly what Herbert believed is hard to say, but my impression has always been that he mostly agrees with the views of his “main” characters: Leto I, Paul, Hayt, Leto II, Siona, Miles Teg, etc. Regarding feudalism, he says it is the “natural condition of human beings…not that it is the only condition or not that it is the right condition”. I’ve found this interview pretty enlightening.
Regarding the “multi-millennial ideologically motivated political stranglehold”, I’m not sure whether he thinks it’s good. But insofar as we think human extinction is bad, we have to see this system as, if not good, then at least successful.
Thanks for the feedback! :)
Thanks for the interview. This is great.