Why is flesh weaker than diamond? Diamond is made of carbon-carbon bonds. Proteins also have some carbon-carbon bonds! So why should a diamond blade be able to cut skin?
I reply: Because the strength of the material is determined by its weakest link, not its strongest link. A structure of steel beams held together at the vertices by Scotch tape (and lacking other clever arrangements of mechanical advantage) has the strength of Scotch tape rather than the strength of steel.
Or: Even when the load-bearing forces holding large molecular systems together are locally covalent bonds, as in lignin (what makes wood strong), if you’ve got larger molecules only held together by covalent bonds at interspersed points along their edges, that’s like having 10cm-diameter steel beams held together by 1cm welds. Again, barring other clever arrangements of mechanical advantage, that structure has the strength of 1cm of steel rather than 10cm of steel.
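(As a back-of-the-envelope illustration of the weakest-link point, here is a minimal sketch; the yield strength is just a typical textbook figure for mild structural steel, and the exact numbers don't matter, only that the minimum, not the maximum, sets the capacity of the whole assembly.)

```python
# Illustrative sketch: a welded joint's capacity scales with the cross-section
# actually bonded, and the structure fails at whichever element gives out first.
import math

STEEL_YIELD_STRENGTH_PA = 250e6  # ~250 MPa, a typical figure for mild structural steel

def axial_capacity_newtons(diameter_m, strength_pa=STEEL_YIELD_STRENGTH_PA):
    """Load a circular cross-section of the given diameter can carry before yielding."""
    area = math.pi * (diameter_m / 2) ** 2
    return strength_pa * area

beam_capacity = axial_capacity_newtons(0.10)  # 10 cm diameter beam
weld_capacity = axial_capacity_newtons(0.01)  # 1 cm weld joining the beams

structure_capacity = min(beam_capacity, weld_capacity)  # the weakest link governs

print(f"10 cm beam: {beam_capacity / 1e3:,.0f} kN")
print(f"1 cm weld:  {weld_capacity / 1e3:,.0f} kN")
print(f"Structure:  {structure_capacity / 1e3:,.0f} kN  (set by the weld, ~1% of the beam)")
```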
Bone is stronger than wood; it runs on a relatively stronger structure of ionic bonds, which are not locally weaker than carbon bonds in terms of attojoules of potential energy per bond. Bone is weaker than diamond, then, because… why?
Well, partially, IIUC, because calcium atoms are heavier than carbon atoms. So even if the ionic forces are strong per bond, some of that advantage is lost in the mass price you pay for including heavier atoms, whose nuclei need the extra protons to exert the stronger electrical forces that make up that stronger bond.
But mainly, bone is so much weaker than diamond (on my understanding) because the carbon bonds in diamond have a regular crystal structure that locks the carbon atoms into relative angles, and in a solid diamond this crystal structure is tessellated globally. Hydroxyapatite (the crystal part of bone) also tessellates in an energetically favorable configuration; but (I could be wrong about this) it doesn’t have the same local resistance to deformation; and also, the actual hydroxyapatite crystal is assembled by other tissues that layer the ionic components into place, which means that a larger structure of bone is full of fault lines. Bone cleaves along the weaker fault line, not at its strongest point.
But then, why don’t diamond bones exist already? Not just for the added strength; why make the organism look for calcium and phosphorus instead of just carbon?
The search process of evolutionary biology is not the search of engineering; natural selection can only access designs via pathways of incremental mutations that are locally advantageous, not intelligently designed simultaneous changes that compensate for each other. There were, last time I checked, only three known cases where evolutionary biology invented the freely rotating wheel. Two of those known cases are ATP synthase and the bacterial flagellum, which demonstrates that freely rotating wheels are in fact incredibly useful in biology, and are conserved when biology stumbles across them after a few hundred million years of search. But there’s no use for a freely rotating wheel without a bearing and there’s no use for a bearing without a freely rotating wheel, and a simultaneous dependency like that is a huge obstacle to biology, even though it’s a hardly noticeable obstacle to intelligent engineering.
The entire human body, faced with a strong impact like being gored by a rhinoceros horn, will fail at its weakest point, not its strongest point. How much evolutionary advantage is there to stronger bone, if what fails first is torn muscle? How much advantage is there to an impact-resistant kidney, if most fights that destroy a kidney will kill you anyways? Evolution is not the sort of optimizer that says, “Okay, let’s design an entire stronger body.” (Analogously, the collection of faults that add up to “old age” is large enough that a little more age resistance in one place is not much of an advantage if other aging systems or outward accidents will soon kill you anyways.)
I don’t even think we have much of a reason to believe that it’d be physically (rather than informationally) difficult to have a set of enzymes that synthesize diamond. It could just require 3 things to go right simultaneously, and so be much much harder to stumble across than tossing more hydroxyapatite to lock into place in a bone crystal. And then even if somehow evolution hit on the right set of 3 simultaneous mutations, sometime over the history of Earth, the resulting little isolated chunk of diamond probably would not be somewhere in the phenotype that had previously constituted the weakest point in a mechanical system that frequently failed. If evolution has huge difficulty inventing wheels, why expect that it could build diamond chainmail, even assuming that diamond chainmail is physically possible and could be useful to an organism that had it?
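(To put a toy number on "much much harder to stumble across": if each required change on its own arises with some small probability per organism per generation, and none of them helps alone, the chance of all of them showing up together scales like that probability raised to the number of required changes. A minimal sketch, with an arbitrary illustrative probability rather than an empirical estimate:)

```python
# Toy sketch: k changes that must co-occur before any of them pays off.
p = 1e-6  # arbitrary illustrative per-change probability, not an empirical estimate
for k in (1, 2, 3):
    print(f"{k} simultaneous change(s): ~{p ** k:.0e} per organism per generation")
# 1 -> ~1e-06, 2 -> ~1e-12, 3 -> ~1e-18: effectively never for blind search,
# while an engineer can simply design the three changes together.
```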
Talking to the general public is hard. The first concept I’m trying to convey to them is that there’s an underlying physical, mechanical reason that flesh is weaker than diamond; and that this reason isn’t that things animated by vitalic spirit, elan vital, can self-heal and self-reproduce at the cost of being weaker than the cold steel making up lifeless machines, as is the price of magic imposed by the universe to maintain game balance. This is a very natural way for humans to think; and the thing I am trying to come in and do is say, “Actually, no, it’s not a mystical balance, it’s that diamond is held together by bonds that are hundreds of kJ/mol; and the mechanical strength of proteins is determined by forces a hundred times as weak as that, the part where proteins fold up like spaghetti held together by static cling.”
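(For readers who want rough numbers behind that contrast, here is a sketch using approximate textbook bond-energy figures; the exact values vary by source and by chemical context, so treat these as order-of-magnitude only.)

```python
# Approximate bond energies in kJ/mol (rough textbook figures; real values vary
# with the specific atoms and their environment).
c_c_covalent  = 350  # e.g. the C-C bonds in diamond
hydrogen_bond = 20   # the "static cling" holding folded proteins (and ice) together
van_der_waals = 2    # still weaker contacts between nonpolar surfaces

print(f"Covalent C-C:  ~{c_c_covalent} kJ/mol")
print(f"Hydrogen bond: ~{hydrogen_bond} kJ/mol (~{c_c_covalent / hydrogen_bond:.0f}x weaker)")
print(f"van der Waals: ~{van_der_waals} kJ/mol (~{c_c_covalent / van_der_waals:.0f}x weaker)")
```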
There is then a deeper story that’s even harder to explain, about why evolution doesn’t build freely rotating wheels or diamond chainmail; why evolutionary design doesn’t find the physically possible stronger systems. But first you need to give people a mechanical intuition for why, in a very rough intuitive sense, it is physically possible to have stuff that moves and lives and self-repairs but is strong like diamond instead of flesh, without this violating a mystical balance where the price of vitalic animation is lower material strength.
And that mechanical intuition is: Deep down is a bunch of stuff that, if you could see videos of it, would look more like tiny machines than like magic, though they would not look like familiar machines (very few freely rotating wheels). Then why aren’t these machines strong like human machines of steel are strong? Because iron atoms are stronger than carbon atoms? Actually no, diamond is made of carbon and that’s still quite strong. The reason is that these tiny systems of machinery are held together (at the weakest joints, not the strongest joints!) by static cling.
And then the deeper question: Why does evolution build that way? And the deeper answer: Because everything evolution builds is arrived at as an error, a mutation, from something else that it builds. Very tight bonds fold up along very deterministic pathways. So (in the average case, not every case) the neighborhood of functionally similar designs is densely connected along shallow energy gradients and sparsely connected along deep energy gradients. Intelligence can leap long distances through that design space using coordinated changes, but evolutionary exploration usually cannot.
And I do try to explain that too. But it is legitimately more abstract and harder to understand. So I lead with the idea that proteins are held together by static cling. This is, I think, validly the first fact you lead with if the audience does not already know it, and just has no clue why anyone could possibly think that there might even be machinery that does what bacterial machinery does but better. The typical audience is not starting out with the naive intuition that of course you could put together stronger molecular machinery, given the physics of stronger bonds, such that we then debate whether (as I believe) the naive intuition is actually just valid and correct; they don’t understand what the naive intuition is about, and that’s the first thing to convey.
If somebody then says, “How can you be so ignorant of chemistry? Some atoms in protein are held together by covalent bonds, not by static cling! There are even, e.g., sulfur bonds whereby some parts of the folded-spaghetti systems end up glued together with real glue!” then this does not validly address the original point, because the underlying point about why flesh is more easily cleaved than diamond is about the weakest points of flesh rather than the strongest points in flesh, since that’s what determines the mechanical strength of the larger system.
I think there is an important way of looking at questions like these where, at the final end, you ask yourself, “Okay, but does my argument prove that flesh is in fact as strong as diamond? Why isn’t flesh as strong as diamond, then, if I’ve refuted the original argument for why it isn’t?” and this is the question that leads you to realize that some local strong covalent bonds don’t matter to the argument if those bonds aren’t the parts that break under load.
My main moral qualm about using the Argument From Folded Spaghetti Held Together By Static Cling as an intuition pump is that the local ionic bonds in bone are legitimately as strong per-bond as the C-C bonds in diamond, and the reason that bone is weaker than diamond is (IIUC) actually more about irregularity, fault lines, and resistance to local deformation than about kJ/mol of the underlying bonds. If somebody says “Okay, fine, you’ve validly explained why flesh is weaker than diamond, but why is bone weaker than diamond?” I have to reply “Valid; IIUC that’s legit more about irregularity and fault lines and interlaced weaker superstructure and local deformation resistance of the bonds, rather than the raw potential energy deltas of the load-bearing welds.”
Minor point about the strength of diamond:
bone is so much weaker than diamond (on my understanding) … Bone cleaves along the weaker fault line, not at its strongest point.
While it is true that the ultimate strength of diamond is much higher than bone, this is relevant primarily for its ability to resist continuously applied pressure (as is its hardness enabling cutting). The point about fault lines seems more relevant for toughness, another material property that describes how much energy can be absorbed without breaking, and there bone beats diamond easily—diamond is brittle.
There are materials that have both high strength and toughness, e.g. nacre and some metallic glass, both of which are composites.
What does this operationalize as? Presumably not that if we load a bone and a diamond rod under equal pressures, the diamond rod breaks first? Is it more that, if we drop sudden sharp weights onto a bone rod and a diamond rod, the diamond rod breaks first? I admit I hadn’t expected that, despite a general notion that diamond is crystal and crystals are unexpectedly fragile against particular kinds of hits, and if so that modifies my sense of what’s a valid metaphor to use.
As a physicist who is also an (unpublished) SF author, if I was trying to describe an ultimate nanoengineered physically strong material, it would be a carbon-carbon composite, using a combination of interlocking structures made out of diamond, maybe with some fluorine passivation, separated by graphene-sheet bilayers, building a complex crack-diffusing structure to achieve toughness in ways comparable to the structures of jade, nacre, or bone. It would be not quite as strong or hard as pure diamond, but a lot tougher. And in a claw-vs-armor fight, yeah, it beats anything biology can do with bone, tooth, or spider silk. But it beats it by less than an order of magnitude, far less than the strength ratio of a covalent bond to a van der Waals bond (and even somewhat less than the ratio to a hydrogen bond). Spider silk actually gets pretty impressively close to the limit of what can be done with C-N covalent bonds; it’s a very fancy piece of evolved nanotech, with a different set of anti-crack tricks. Now, flesh, that’s pretty soft, but it’s primarily evolved for metabolic effectiveness, flexibility, and ease of growth rather than being difficult to bite through: gristle, hide, chitin, or bone spicules get used when that’s important.
But yes, if I was giving a lecture to non-technical folks where “diamond is stronger than flesh-and-bone” was a quick illustrative point rather than the subject of the lecture, I might not bother to mention that, unless someone asked “doesn’t diamond shatter easily?”, to which the short answer is “crystalline diamond yes, but nanotech can and will build carbon-carbon composites out of diamond that don’t”.
I see the appeal of using “static cling” as a metaphor to non-technical folks, but it is something of an exaggeration for hydrogen bonds—that description better fits the significantly weaker van der Waals bonds. “Glue” might be a fairer analogy than “static cling”. The non-protein-chain bonds in biology that are the weak links that tend to fail when flesh tears are mostly hydrogen bonds, and the quickest way to explain that to someone non-technical would be “the same sort of bonds that hold ice together”. So the proportionate analogy is probably “diamond is a lot harder than ice, and the way the human body is built, outside of a few of the strongest bits like bones, teeth and sinews, is basically held together mostly by the same sort of weakish bonds that hold ice together”.
I checked this, and this post is correct. At least, when you’re talking about bones and common, natural diamonds, which are monocrystalline.
The toughness of bone is about 2-4 MPa√m (depending on the exact form of toughness) and can increase to 3-20 MPa√m locally, because when bones crack, microfractures can deflect the crack from growing along the directions of maximum tensile stress.
As compared to common natural forms of diamond, which only have a toughness of 2 MPa√m. Which is mediocre compared to other engineering materials. However! Other naturally occurring forms of diamond, such as carbonado, are much tougher and just as hard. Carbonado’s strength comes from the random orientation of microdiamonds, i.e. it is not monocrystalline. There’s little numerical data in the literature on this, but it is predicted that its toughness will exceed 10-20 MPa√m (paywalled article with a confusing preview). Some evidence for their toughness comes from industrial usage for things like deep-drilling bits, unlike regular diamond. Moreover, designed diamonds have achieved Pareto improvements in toughness and hardness compared to common natural diamonds (reaching up to 26.6 MPa√m for nanotwinned diamond).
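(As a rough illustration of what those toughness numbers buy you, here is a sketch using the standard relation K = σ√(πa) for a through-crack in a large plate; the applied stress is arbitrary and the toughness values are the approximate figures quoted above, so treat the output as order-of-magnitude only.)

```python
import math

def critical_crack_length_mm(k_ic_mpa_sqrt_m, applied_stress_mpa):
    """Half-length (mm) at which a through-crack in a large plate becomes unstable,
    from K = sigma * sqrt(pi * a) set equal to the fracture toughness K_IC."""
    a_m = (k_ic_mpa_sqrt_m / applied_stress_mpa) ** 2 / math.pi
    return a_m * 1000

stress = 100  # MPa, an arbitrary illustrative applied stress
for material, k_ic in [("bone (~3 MPa*sqrt(m))", 3.0),
                       ("common monocrystalline diamond (~2 MPa*sqrt(m))", 2.0),
                       ("nanotwinned diamond (~26.6 MPa*sqrt(m))", 26.6)]:
    a_c = critical_crack_length_mm(k_ic, stress)
    print(f"{material}: flaws beyond ~{a_c:.2f} mm run away at {stress} MPa")
```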
So diamonds can be clearly superior to bone. And yeah, these things probably aren’t bound together on a large scale by van der Waals forces (I haven’t looked into that aspect for unusual diamonds like carbonado; not an expert, just took a couple of solid state physics courses in uni). But. Carbonado seems to gain its strength from irregularities. Sometimes irregularities make materials much stronger, sometimes much weaker. Sometimes “fault lines” can be beneficial, because they allow the material to be ductile, which you want. Like the ductility of steel, IIRC, comes from irregularities in the lattice structure which are moved around as the material deforms.
Sometimes irregularities make materials much stronger, sometimes much weaker. Sometimes “fault lines” can be beneficial, because they allow the material to be ductile, which you want. Like the ductility of steel, IIRC, comes from irregularities in the lattice structure which are moved around as the material deforms.
And in that deformation (of a metal or other crystal), you both create the discontinuities (esp. dislocations) that increase strength and introduce brittleness (work hardening). But the highest strength you can get with this kind of process is still not as high as you’d get from a defect-free crystal, such as a monocrystalline whisker.
This is an interesting one. I’d also have thought a priori that your strategy of focusing on strength (we’re basically focusing pretty hard on tensile strength I think?) would be nice and simple and intuitive.[1]
But in practice this seems to confuse/put off quite a few people (exemplified by this post and similar). I wonder if focusing on other aspects of designed-vs-evolved nanomachines might be more effective? One core issue is the ability to aggressively adversarially exploit existing weaknesses and susceptibilities… e.g. I have had success by making gestures like ‘rapidly iterating on pandemic-potential viruses or other replicators’[2]. I don’t think there’s a real need to specifically invoke hardnesses in a ‘materials science’ sense. Like, ten pandemics a month is probably enough to get the point across, and doesn’t require stepping much past existing bio. Ten pandemics a week, coordinated with photosynthetic and plant-parasitising bio stuff if you feel like going hard. I think these sorts of concepts might be inferentially closer for a lot of people. It’s always worth emphasising (and you do) that any specific scenario is overly conjunctive and just one option among many.
If I had to guess an objection, I wonder if you might feel that’s underplaying the risk in some way?
[1] It brings to mind the molecular simulations of proteins and other existing nanomachines, where everything is amusingly wiggly. Like, it looks so squishy! Obviously things can be stronger than that.
[2] By ‘success’ I mean ‘they have taken me seriously, apparently updated their priorities, and (I think) in a good and non-harmful way’.
“Pandemics” aren’t a locally valid substitute step in my own larger argument, because an ASI needs its own manufacturing infrastructure before it makes sense for the ASI to kill the humans currently keeping its computers turned on. So things that kill a bunch of humans are not a valid substitute for being able to, eg, take over and repurpose the existing solar-powered micron-diameter self-replicating factory systems, aka algae, and those repurposed algae being able to build enough computing substrate to go on running the ASI after the humans die.
It’s possible this argument can and should be carried without talking about the level above biology, but I’m nervous that this causes people to start thinking in terms of Hollywood movie plots about defeating pandemics and hunting down the AI’s hidden cave of shoggoths, rather than hearing, “And this is a lower bound but actually in real life you just fall over dead.”
“Pandemics” aren’t a locally valid substitute step in my own larger argument, because an ASI needs its own manufacturing infrastructure before it makes sense for the ASI to kill the humans currently keeping its computers turned on.
When people are highly skeptical of the nanotech angle yet insist on a concrete example, I’ve sometimes gone with a pandemic, coupled with limited access to medications that temporarily stave off, but don’t cure, that pandemic, as a way to force a small workforce of humans, preselected to cause few problems, to maintain the AI’s hardware and build it the seed of a new infrastructure base while the rest of humanity dies.
I feel like this has so far maybe been more convincing and perceived as “less sci-fi” than Drexler-style nanotech by the people I’ve tried it on (small sample size, n<10).
Generally, I suspect that not basing the central example on a position on one side of yet another fierce debate in technology forecasting matters more than making things sound less like a movie where the humans might win. In my experience with these conversations so far, the rate at which people understand that something sounding like a movie does not imply the humans have a realistic chance of winning in real life (just because they won in the movie) exceeds the rate at which people get on board with scenarios that involve any hint of Drexler-style nanotech.
After reading Pope and Belrose’s work, a viewpoint of “lots of good aligned ASIs already building nanosystems and better computing infra” has solidified in my mind. And therefore, any accidentally or purposefully created misaligned AIs necessarily wouldn’t have a chance of long-term competitive existence against the existing ASIs. Yet those misaligned AIs might still be able to destroy the world via nanosystems, since we wouldn’t yet trust the existing AIs with the herculean task of protecting our dear nature against the invasive nanospecies and all such. Byrnes voiced similar concerns in his point 1 against Pope & Belrose.
Gotcha, that might be worth taking care to nuance, in that case. e.g. the linked twitter (at least) was explicitly about killing people[1]. But I can see why you’d want to avoid responses like ‘well, as long as we keep an eye out for biohazards we’re fine then’. And I can also imagine you might want to preserve consistency of examples between contexts. (Risks being misconstrued as overly-attached to a specific scenario, though?)
I’m nervous that this causes people to start thinking in terms of Hollywood movie plots… rather than hearing, “And this is a lower bound...”
Yeah… If I’m understanding what you mean, that’s why I said,
It’s always worth emphasising (and you do), that any specific scenario is overly conjunctive and just one option among many.
And I further think actually having a few scenarios up the sleeve is an antidote to the Hollywood/overly-specific failure mode. (Unfortunately ‘covalently bonded bacteria’ and nanomachines also make some people think in terms of Hollywood plots.) Infrastructure can be preserved in other ways, especially as a bootstrap. I think it might be worth giving some thought to other scenarios as intuition pumps.
e.g. AI manipulates humans into building quasi-self-sustaining power supplies and datacentres (or just waits for us to decide to do that ourselves), then launches kilopandemic followed by next-stage infra construction. Or, AI invests in robotics generality and proliferation (or just waits for us to decide to do that ourselves), then uses cyberattacks to appropriate actuators to eliminate humans and bootstrap self-sustenance. Or, AI exfiltrates itself and makes oodles of horcruxes, er, backups, launches green goo with a genetic clock for some kind of reboot after humans are gone (this one is definitely less solid). Or, AI selects and manipulates enough people willing to take a Faustian bargain as its intermediate workforce, equips them (with strategy, materials tech, weaponry, …) to wipe out everyone else, then bootstraps next-stage infra (perhaps with human assistants!) and finally picks off the remaining humans if they pose any threat.
Maybe these sound entirely barmy to you, but I assume at least some things in their vicinity don’t. And some palette/menu of options might be less objectionable to interlocutors while still providing some lower bounds on expectations.
[1] Admittedly, Twitter is where nuance goes to die, some heroic efforts notwithstanding.
An attempt to optimize for a minimum of abstractness, picking up what was communicated here:
How could an ASI kill all humans? By setting off several engineered pandemics a month, with a moderate increase in infectiousness and lethality compared to historical natural cases.
How could an ASI sustain itself without humans? With conventional robotics, plus a moderate increase in the intelligence planning and controlling the machinery.
People coming in contact with that argument will check its plausibility, as they will with a hypothetical nanotech narrative. If so inclined, they will come to the conclusion that we may very well be able to protect ourselves against that scenario, either by prevention or mitigation, to which a follow-up response can be a list of other scenarios at the same level of plausibility, derived from not being dependent on hypothetical scientific and technological leaps. Triggering this kind of x-risk skepticism in people seems less problematic to me than making people think the primary x-risk scenario is far-fetched sci-fi and most likely doesn’t hold up to scrutiny by domain experts. I don’t understand why communicating a “certain drop dead scenario” with low plausibility seems preferable to a “most likely drop dead scenario” with high plausibility, but I’m open to being convinced that this approach is better suited to the goal of getting the x-risk of ASI taken seriously by more people. Perhaps I’m missing a part of the grander picture?
It’s false that currently existing robotic machinery controlled by moderately smart intelligence can pick up the pieces of a world economy after it collapses. One well-directed algae cell could, but not existing robots controlled by moderate intelligence.
The question in point 2 is whether an ASI could sustain itself without humans and without new types of hardware such as Drexler-style nanomachinery, which to a significant portion of people (me not included) seems too hypothetical to be of actual concern. I currently don’t see why the answer to that question should be a highly certain no, as you seem to suggest. Here are some thoughts:
The world economy is largely catering to human needs, such as nutrition, shelter, healthcare, personal transport, entertainment and so on. Phenomena like massive food waste and people stuck in bullshit jobs, to name just two, also indicate that it’s not close to optimal at that. An ASI would therefore not have to prevent the world economy from collapsing or pick it up afterwards, which I also don’t think is remotely possible with existing hardware. I think the majority of processes running in the only example of a world economy we have are irrelevant to the self-preservation of an ASI.
An ASI would presumably need to keep its initial compute substrate running long enough to transition into some autocatalytic cycle, be it on the original or a new substrate. (As a side remark, it’s also conceivable that it might go into a reduced or dormant state for a while and let less energy- and compute-demanding processes act on its behalf until conditions have improved on some metric.) I do believe that conventional robotics is sufficient to keep the lights on long enough, but to be perfectly honest, that’s conditioned on a lack of knowledge about many specifics, like exact figures for hardware turnover and the energy requirements of data centers capable of running frontier models, the amount and quality of chips currently existing on the planet, the actual complexity of keeping different types of power plants running for a relevant period of time, the many detailed issues of existing power grids, etc. I weakly suspect there is some robustness built into these systems that stems from more than just the flexible bodies of human operators or practical know-how that can’t be deduced from the knowledge base of an ASI that might be built.
The challenge would be rendered more complex for an ASI if it were not running on general-purpose hardware but on special-purpose circuitry that’s much harder to maintain and replace. It may additionally be a more complex task if the ASI could not gain access to its own source code (or relevant parts of it), since that presumably would make a migration onto other infrastructure considerably more difficult, though I’m not fully certain that’s actually the case, given that the compiled and operational code may be sufficient for an ASI to deduce weights and other relevant aspects.
Evolution presumably started from very limited organic chemistry and discovered autocatalytic cycles based on biochemistry, catalytically active macromolecules and compartmentalized cells. That most likely implies that a single cell may be able to repopulate an entire planet that is sufficiently earth-like and give rise to intelligence again after billions of years. That fact alone certainly does not imply that thinking sand needs to build hypothetical nanomachinery to win the battle against entropy over a long period of time. Existing actuators and chips on the planet, the hypothetical absence of humans, and an HLAI or an ASI moderately above it may be sufficient, in my current opinion.
I rather expect that existing robotic machinery could be controlled by ASI, rather than “moderately smart intelligence”, into picking up the pieces of a world economy after it collapses; or that if for some weird reason it was trying to play around with static-cling spaghetti, it could pick up the pieces of the economy that way too.
It seems to me as if we expect the same thing, then: If humanity was largely gone (e.g. by several engineered pandemics) and as a consequence the world economy came to a halt, an ASI would probably be able to sustain itself long enough by controlling existing robotic machinery, i.e. without having to make dramatic leaps in nanotech or other technology first. What I wanted to express with “a moderate increase of intelligence” is that it won’t take an ASI at the level of GPT-142 to do that; GPT-7 together with current projects in robotics might suffice to bring the necessary planning and control of actuators into existence.
If that assumption holds, it means an ASI might come to the conclusion that it should end the threat that humanity poses to its own existence and goals long before it is capable of building Drexler nanotech, Dyson spheres, Von Neumann probes or anything else that a large portion of people find much too hypothetical to care about at this point in time.
This totally makes sense! But “proteins are held together by van der Waals forces that are much weaker than covalent bonds” is still bad communication.
Impressive.
“things animated by vitalic spirit, elan vital, can self-heal and self-reproduce”
Why aren’t you talking with Dr. Michael Levin and Dr. Gary Nolan to assist in the 3D mobility platform builds facilitating AGI biped interaction? Both of them would most assuredly be open to your consult going forward.
Good to see you’re still writing.