For FAI: Is “Molecular Nanotechnology” putting our best foot forward?
Molecular nanotechnology, or MNT for those of you who love acronyms, seems to be a fairly common trope on LW and in related literature. It’s not really clear to me why. In many of the examples of “How could AIs help us?” or “How could AIs rise to power?”, phrases like “cracks protein folding” or “making a block of diamond is just as easy as making a block of coal” are thrown about in ways that make me very uncomfortable. Maybe it’s all true, maybe I’m just late to the transhumanist party and the obviousness of this information was in my invitation that got lost in the mail, but seeing all the physics swept under the rug like that sets off every crackpot alarm I have.
I must post the disclaimer that I have done a little bit of materials science, so maybe I’m just annoyed that you’re making me obsolete, but I don’t see why this particular possible future gets so much attention. Let us assume that a smarter-than-human AI will be very difficult to control and represents a large positive or negative utility for the entirety of the human race. Even given that assumption, it’s still not clear to me that MNT is a likely element of the future. It isn’t clear to me that MNT is physically practical. I don’t doubt that remarkable things can be done, or that very clever metastable arrangements of atoms with novel properties can be dreamed up; indeed, that’s my day job. But I have a hard time believing that the only reason you can’t make a nanoassembler capable of arbitrary manipulations out of a handful of bottles you ordered from Sigma-Aldrich is that we’re just not smart enough. Manipulating individual atoms means climbing huge binding energy curves; it’s an enormously steep, enormously complicated energy landscape, and the Schrödinger equation scales very poorly as you add particles and degrees of freedom. Building molecular nanotechnology seems to me roughly equivalent to making arbitrary Lego structures by shaking a large bin of Lego in a particular way while blindfolded. Maybe a superhuman intelligence is capable of doing so, but it’s not at all clear to me that it’s even possible.
I assume the reason MNT is added to a discussion of AI is that we’re trying to make the future sound more plausible by adding burdensome details. I understand that “AI and MNT” is less probable than AI or MNT alone, yet the conjunction is supposed to sound more plausible. This is precisely where I have difficulty. I would estimate the probability of molecular nanotechnology (in the form of programmable replicators, grey goo, and the like) as lower than the probability of human-level or superhuman AI. I can think of all sorts of objections to the former, but very few objections to the latter. Including MNT as a consequence of AI, and especially including it without addressing any of the fundamental difficulties of MNT, harms the credibility of AI researchers, I would argue. It makes me nervous about sharing FAI literature with people I work with, and it continues to bother me.
I am particularly bothered by this because it seems irrelevant to FAI. I’m fully convinced that a smarter-than-human AI could take control of the Earth by less magical means, using time-tested methods such as manipulating humans, rigging elections, making friends, killing its enemies, and generally being only marginally more clever and motivated than a typical human leader. A smarter-than-human AI could out-manipulate human institutions and out-plan human opponents with the sort of ruthless efficiency with which modern computers beat humans at chess. I don’t think convincing people that smarter-than-human AIs have enormous potential for good and evil is particularly difficult, once you can get them to concede that smarter-than-human AIs are possible. I do think that waving your hands and saying “superintelligence” at things that may be physically impossible makes the whole endeavor seem less serious. If I had read the chain of reasoning smart computer -> nanobots before I had built up a store of goodwill from reading the Sequences, I would have almost immediately dismissed the whole FAI movement as a bunch of soft science fiction, and it would have been very difficult to get me to take a second look.
Put in LW parlance, suggesting things not known to be possible by modern physics without detailed explanations puts you in the reference class “people on the internet who have their own ideas about physics”. It didn’t help, in my particular case, that one of my first interactions on LW was in fact with someone who appears to have their own view about a continuous version of quantum mechanics.
And maybe it’s just me. Maybe this did not bother anyone else, and it’s an incredible shortcut for getting people to realize just how different a future a greater-than-human intelligence makes possible, and there is no better example. It does alarm me, though, because I think that physicists, and the kind of people who notice and get uncomfortable when you start invoking magic in your explanations, may be exactly the kind of people FAI outreach is trying to attract.
I agree with this.
What do you agree with? For example, I agree that it could, hypothetically, resort to such conventional methods (just as it could, hypothetically, paint the Moon in yellow), but I don’t think it’s likely. Do you mean that you think it’s likely (or not unlikely etc.)?
Specifically, with the claim that bringing up MNT is unnecessary, both in the “burdensome detail” sense and “needlessly science-fictional and likely to trigger absurdity heuristics” sense.
Likewise. I’ve always read Eliezer’s original statement along the lines of “and then AI will invent a powerful new technology or will use existing technology in a new and highly effective way.” There are a dozen of those; MNT is just one example. With so many options, the probability of achieving at least one of them is pretty high.
For some reason no one wants to hold Eric Drexler accountable now for the grandiose, irresponsible and frankly cringe-worthy things he wrote back in the 1980s.
Case in point. I turned 27 in 1986, the year Drexler published Engines of Creation, so I belong to the generation referred to in the following speculation:
http://e-drexler.com/d/06/00/EOC/EOC_Chapter_8.html
I turn 54 this November, and I can assure you that no one in my generation has seen “medicine’s overtaking their aging process.”
Yet many cryonicists have bet their futures on this fantasy technology, when regular people can see that it has taken on the characteristics of an apocalyptic religious belief instead of a rational assessment of future capabilities. Cryonicist Thomas Donaldson warned that this would happen and not help cryonics’ credibility, back around the time Drexler predicted that I would start to grow younger by now.
Apparently Drexler wants to reboot his reputation with a new book, but someone needs to remind people about the things he promised us in his 1980s-era writings which haven’t come to pass.
Isn’t life an example of self-assembling molecular nanotechnology? If life exists, then our physics allows for programmable systems which use similar processes.
We already have Turing-complete molecular computers, but they’re currently too slow and expensive for practical use. I predict that self-assembling nanotech programmed with a library of robust modular components will happen long before strong AI.
Life is a wonderful example of self-assembling molecular nanotechnology, and as such gives you a template of the sorts of things that are actually possible (as opposed to Drexlerian ideas). That is to say: everything is built from a few dozen stereotyped monomers assembled into polymers, rather than from atoms arranged arbitrarily. There are errors at every step of the way, from mutations to misincorporation of amino acids in proteins, so everything must be robust to small problems (seriously, something like 10% of the large proteins in your body have an amino acid out of place, as opposed to being built with atomic precision, and they can be altered and damaged over time). Life uses a lot of energy via a metabolism to maintain itself in the face of the world and its own chemical instability, often more energy over a relatively short time than is embodied in the chemical bonds of the structure itself, if it’s doing anything interesting (and for that matter, building it requires much more energy than is actually embodied). You have many discrete medium-sized molecules moving around and interacting in aqueous solution, rather than much in the way of solid-state action. And on scales larger than viruses or protein crystals, everything is built more or less according to a recipe of interacting forces and emergent behavior, rather than from something like a digital blueprint.
So yeah, remarkable things are possible, most likely even including things that naturally-evolved life does not do now. But there are limits and it probably does not resemble the sorts of things described in “Nanosystems” and its ilk at all.
Was this true at the macroscale too? The jet flying over my head says “no”. Artificial designs can have different goals than living systems, and are not constrained by the need to evolve via a nearly-continuous path of incremental fitness improvements from abiogenesis-capable ancestor molecules, and this turned out to make a huge difference in what was possible.
I’m also skeptical about the extent of what may be possible, but your examples don’t really add to that skepticism. Two examples (systems that evolved from random mutations don’t have ECC to prevent random mutations; systems that evolved from aquatic origins do most of their work in aqueous solution) are actually reasons for expecting a wider range of possibilities in designed vs evolved systems; one (dynamic systems may not be statically stable) is true at the macroscale too, and one (genetic code is vastly less transparent than computer code) is a reason to expect MNT to involve very difficult problems, but not necessary a reason to expect very underwhelming solutions.
Biology didn’t evolve to take advantage of ridiculously concentrated energy sources like fossil petroleum, or of major industrial infrastructure, two things that make jets possible. This is similar to some of the reasons I think synthetic molecular technology will probably be capable of things that biology isn’t: by taking advantage of, say, electricity as an energy source, or by one-off batch synthesis, bringing together non-self-replicating systems from parts made separately.
In fact, the analogy of a bird to a jet might be apt for describing the differences between what a synthetic system could do and what biological systems do now, since the former could use different energy sources and non-self-replicating components (though it might be a lot harder to brute-force such a change in quantitative performance by ridiculous application of huge amounts of energy at low efficiency).
I still suspect, however, that when you are looking at the sorts of reactions that can be done and patterns that can be made in quantities that matter as more than curiosities or rare expensive fragile demonstrations, you will be dealing with more statistical reactions than precise engineering and dynamic systems rather than static (at least during the building process) just because of the nature of matter at this scale.
edited for formatting
Please use paragraphs.
EDIT: thanks for the formatting update!
What do you make of the picture Richard Jones paints? I’m not much more than a layman, though I happen to know my way around medicine, and I find his critique of Drexler’s vision of nanotechnology sound.
His position seems to be that Drexler-style nanotechnology is theoretically possible, but that developing it would be very difficult.
A hypothetical superintelligence might find it easier...
Yes, that seems to be his main argument against Drexler’s vision, though I wonder if he thinks it’s difficult to come up with a design that would be robust, or if that kind of nanotechnology would be difficult to implement because it requires certain conditions, such as a vacuum close to 0 kelvin, which might be a bit cumbersome even for a superintelligence(?) unless you hang out a lot in space.
Except that Drexlerian ideas are very alien compared to life, and are also physically possible (according to Nanosystems).
You are generalizing to all of physics from the narrow band of biochemistry. Biochemistry is aqueous, solvent-based, room-temperature-range, and evolved. It is not comparable to e.g. printed circuitry on a silicon chip.
There are sure to be limits. However, the limits are probably nothing like those of life. Life is kind of useful to point to as an example of how self-replicating systems can exist, but apart from that it is a very misleading analogy. (At least, if we’re talking about hard nanotech, which is what MNT usually is used to refer to and what Drexler focuses on. Soft nanotech that mimics or borrows from biology is incredibly interesting, but different.)
He is answering someone specifically bringing up life as an example of why Drexler’s ideas are possible, and why that doesn’t actually hold.
To what extent is labeling the behavior of biological systems as “emergent” just an admission that these systems are currently mysterious to us?
I don’t think it’s clear to what extent biological systems have “emergent” behavior, vs. organization into distinct “modules” each with a specific role, and robust feedback systems.
The book chapter “On Modules and Modularity” in the book System Modeling in Cellular Biology argues that simple modular design is likely selected for, as it would increase the ability of an organism to evolve and adapt. Non-modular systems are so interconnected that small changes break too many things. Biological systems may be more modular (and therefore more understandable) than they currently seem, but we’ll need to extend our study to look more at dynamic behavior before we can identify these modules and understand their function.
The fact that biological systems are so reliable despite high error rates in the underlying processes shows that feedback systems are an effective strategy to build robust systems from somewhat unreliable components.
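One classic mechanism in this family, redundancy with majority voting (von Neumann’s trick for building reliable machines from unreliable parts), can be illustrated with a toy calculation. This is a sketch of the general principle only, not a model of any actual biological pathway, and the 10% figure is borrowed from the misfolded-protein estimate above:

```python
from math import comb

def majority_failure(p, n):
    """Probability that more than half of n independent components,
    each failing with probability p, fail at once (breaking a
    majority vote over their outputs)."""
    k = n // 2 + 1  # smallest number of failures that wins the vote
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.10  # per-component error rate, like the protein figure above
for n in (1, 5, 21):
    print(f"{n:2d} copies -> system failure rate {majority_failure(p, n):.2e}")
```

With a 10% per-component failure rate, five-way redundancy already pushes system-level failure below 1%, and 21 copies push it below one in 10^5, which is the sense in which feedback and redundancy can buy reliability that no single component has.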
I think this suggests that we may not need to solve hard problems such as protein folding before we can build practical self-assembling nanotech. We “just” need a programmable library of robust modules that can be combined in arbitrary ways, but we may find that these already exist, or that they can be engineered from what we do understand (such as RNA chemistry).
I don’t seem to have the same disdain for the word ‘emergent’ as much of the population here. I don’t use it as a curiosity stopper or in place of the word ‘mysterious’; I wouldn’t be much of a biologist if a little emergent behavior stopped me cold. (Also, no argument about the many modular things in biological systems; I pull out and manipulate pathways and regulatory circuits regularly in my work, but there is a whole lot which is still very context-dependent.) In this context I used the word emergent to mean that rather than having some kind of map of the final structure embedded in the genetic instructions, you have those instructions specifying the properties of small elements, which then produce the larger structures only in the context of their interactions with each other. They produce a lot more structure than is actually encoded in the DNA via the rather opaque ‘decompression algorithm’ of physics and chemistry, through which small simple changes to the elements can map to almost no change in the product, or to vast changes across multiple attributes. I’ve always found the analogy of genetics to a blueprint or algorithm tiresome and the analogy to a food recipe much more applicable; nothing in a recipe dictates things like, say, fluffiness other than the interactions of everything you put in, in the context of an oven. You can alter biological systems in numerous ways with some regularity, but only in some cases are there simple knobs you can turn to alter isolated attributes.
I mostly agree with your last two paragraphs, actually. Synthetic systems with properties similar to things like RNA or protein chemistry may eventually have a lot of power especially if they contain chemical properties not present in any of the basic building blocks of biology. They just will not have atomic-scale precision or arbitrary control over matter, and will be limited by things analogous to nutrients and metabolisms and either require a hell of a lot of functionality not directly connected to their main functions to hold themselves together or a lot of external infrastructure to make disposable things.
I really like your recipe analogy, I think it would be very useful for teaching molecular biology.
I think our discussion mirrors the tension between traditional biology and bioengineering. As a bioengineer I’m primarily concerned with what is possible to build given the biology we already know.
While I agree that a “blueprint” isn’t a good analogy for naturally evolved living organisms, this doesn’t prevent us from engineering new molecular systems that are built from a blueprint. As I mentioned, we already have Turing-complete molecular computers, and software compilers that can turn any code into a set of molecules that will perform the computation. It’s currently too slow and expensive to be useful, but it shows that programmable molecular systems are possible.
It’s the usual analogy I see.
Nanosystems discusses theoretical maximums. However, even if you make the assumption that living cells are as good as it gets, an E. coli, which we know from extensive analysis uses around 25,000 moving parts, can double itself in 20 minutes.
So in theory, you have some kind of nano-robotic system that is able to build stuff. Probably not any old stuff—but it could produce tiny subunits that can be assembled to make other nano-robotic systems, and other similar things.
And if it ran as fast as an e-coli, it could build itself every 20 minutes.
That’s still pretty much a revolution, a technology that could be used to tear apart planets. It just might take a bit longer than it takes in pulp sci-fi.
That looks like it’s missing the point to me. As one of my physics professors put it, “we already have grey goo. It’s called bacteria.” If living cells are as good as it gets, and E. coli didn’t tear apart the Earth, that’s solid evidence that nanosystems won’t tear apart the Earth.
I’d say life is very near to as good as it gets in terms of moving around chemical energy and using it to transform materials without something like a furnace or a foundry. You’re never going to eat rock; it’s already in a pretty damn low energy state that you cannot use for energy. Lithotrophic bacteria take advantage of redox differences between materials in rocks and live REALLY slowly so that new materials can leach in. You need to apply external energy to rock in order to transform it. And as TheOtherDave has said, major alterations have happened, but according to rather non-grey-goo patterns, and I suspect that the sorts of large-scale reactions (as opposed to side-branches that some energy takes) will be more similar to biological transformations than to other possibilities.
I do think that life is not necessarily as good as it gets in terms of production of interesting bulk materials or photosynthesis, though, because in both these cases we can take advantage of infrastructure that is not self-replicating on its own to help things along. Imagine a tank in which electrodes coming from photovoltaics (hopefully made of something better than the current heavy-metal-doped silicon, something that could easily be recycled or degraded when it inevitably photodegrades) directly drive the redox reactions that fix CO2 from the air into organic molecules, followed by the chemistry required to take that feedstock and make it into an interesting material (along with an inevitable waste product or six). Drop in the appropriate nutrient/vitamin analogues and let it run, then purify the output. I sometimes wonder if such a system might in the long run cause an ‘ecological’ disruption by being more efficient at creating materials from simple feedstocks than regular living plants, and over very long timescales crowd them out, but then there is the issue of the non-self-replicating components, which add a drag. It’s a very interesting and potentially strange set of scenarios to be sure, but yeah, not exactly grey goo (grey sprawl?).
EDIT: Percival Zhang’s research at Virginia Tech may provide a look at some of the ideas I find particularly interesting:
Cell-free biofuel production:
http://pubs.acs.org/doi/abs/10.1021/cs200218f
Proposals for synthetic photosynthesis:
http://pubs.acs.org/doi/abs/10.1021/bk-2012-1097.ch015
http://precedings.nature.com/documents/4167/version/1
General overview:
http://www.vt.edu/spotlight/innovation/2012-02-27-fuels/zhang.html
I’d be really surprised if evolution has done all it can. We simply don’t know enough to say what might turn up in the next million years or ten million years.
Though they can alter it catastrophically.
Bacteria, as well as all life, are stuck at a local maximum because evolution cannot find optimal solutions. Part of Drexler’s work is to estimate what the theoretical optimum solutions can do.
My statement “tear apart planets” assumed too much knowledge on the part of the reader; I thought it was frankly pretty obvious. If you have a controllable piece of industrial machinery that uses electricity and can process common elements into copies of itself, but runs no faster than bacteria, tearing apart a planet is a straightforward engineering exercise. I did NOT mean the machinery looked like bacteria in any way, merely that it could copy itself no faster than bacteria.
And by “copy itself”, what I really meant is that given supplies of feedstock (bacteria need sugar, water, and a few trace elements; our “nanomachinery” would need electricity and a supply of pure intermediates for every element you are working with) it can arrange that feedstock into thousands of complex machine parts, such that the machinery doing this process can make its own mass in atomically perfect products in an hour.
I’ll leave it up to you to figure out how you could use this tech to take a planet apart in a few decades. I don’t mean a sci-fi swarm of goo, I mean an organized effort resembling a modern mine or construction site.
It’s not clear to me what you mean by “tearing apart a planet.” Are you sifting out most of the platinum and launching it into orbit? Turning it into asteroids? Rendering the atmosphere inhospitable to humans?
Because I agree that the last is obviously possible, the first probably possible, the second probably impossible without ludicrous expenditures of effort. But it’s not clear to me that any of those are things which nanotechnology would be the core enabler on.
If you mean something like “reshape the planet in its image,” then again I think bacteria are a good judge of feasibility- because of the feedstock issues. As well, it eventually becomes more profitable to prey on the nanomachines around you than the inert environment, and so soon we have an ecosystem a biologist would find familiar.
Jumping to another description, we could talk about “revolutionary technologies,” like the Haber-Bosch process, which consumes about 1% of modern energy usage and makes agriculture and industry possible on modern scales. It’s a chemical trick that extracts nitrogen from its inert state in the atmosphere and puts it into more useful forms like ammonia. Nanotech may make many tricks like that much more available and ubiquitous, but I think it will be a somewhat small addition to current biological and chemical industries, rather than a total rewriting of those fields.
This problem is very easy to solve using induction. Base step: the minimum “replicative subunit”. For life, that is usually a single cell. For nano-machinery, it is somewhat larger. For the sake of penciling in numbers, suppose you need a robot with a scoop and basic mining tools, a vacuum chamber, a 3D printer able to melt metal powder, a nanomachinery production system that is itself composed of nanomachinery, a plasma furnace, a set of pipes and tubes and storage tanks for producing the feedstock the nanomachinery needs, and a power source.
All in all, you could probably fit a single subunit into the size and mass of a Greyhound bus. One notable problem is that there is enough complexity here that current software could probably not keep a factory like this running forever, because eventually something would break that it doesn’t know how to fix.
Anyways, you set down this subunit on a planet. It goes to work. In an hour, the nanomachinery subunit has made a complete copy of itself. In somewhat more time, it has to manufacture a second copy of everything else. The nanomachinery subunit makes all the high end stuff—the sensors, the circuitry, the bearings—everything complex, while the 3d printer makes all the big parts.
Pessimistically, this takes a week. A Greyhound bus footprint is 9 by 45 feet, or about 405 square feet, and there are about 5.5e15 square feet on the Earth’s surface. That is roughly 1.3e13 subunits, or about 44 doublings; at one doubling per week, covering the whole planet’s surface would therefore take 44 weeks.
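The replication arithmetic in this comment can be sanity-checked in a few lines (a toy calculation using the comment’s own numbers, which are themselves rough guesses):

```python
import math

# Hypothetical numbers from the comment above: one bus-sized
# replicator doubling once per week.
unit_area_sqft = 9 * 45        # footprint of one subunit: 405 sq ft
earth_surface_sqft = 5.5e15    # Earth's surface area in square feet

# With doubling growth, n doublings yield 2**n units, so we need
# ceil(log2(total units)) doublings to tile the surface.
units_needed = earth_surface_sqft / unit_area_sqft  # about 1.36e13 subunits
weeks = math.ceil(math.log2(units_needed))          # one doubling per week

print(f"subunits needed: {units_needed:.2e}")
print(f"weeks to cover the surface: {weeks}")
```

Because the growth is exponential, the answer is insensitive to the exact footprint: doubling the subunit’s size only shaves one week off the total.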
Now you need to do something with all the enormous piles of waste material (stuff you cannot make more subunits with) and un-needed materials. So you reallocate some of the 1.3e13 robotic systems to build electromagnetic launchers to fling the material into orbit. You also need to dispose of the atmosphere at some point, since all that air causes each electromagnetic launch to lose energy as friction, and waste heat is a huge problem. (my example isn’t entirely fair, I suspect that waste heat would cook everything before 44 weeks passed). So you build a huge number of stations that either compress the atmosphere or chemically bond the gasses to form solids.
With the vast resources in orbit, you build a sun-shade to stop all solar input to reduce the heat problem, and perhaps you build giant heat radiators in space and fling cold heat sinks to the planet or something. (with no atmospheric friction and superconductive launchers, this might work). You can also build giant solar arrays and beam microwave power down to the planet to supply the equipment so that each subunit no longer needs a nuclear reactor.
Once the earth’s crust is gone, what do you do about the rest of the planet’s mass? Knock molten globules into orbit by bombarding the planet with high energy projectiles? Build some kind of heat resistant containers that you launch into space full of lava? I don’t know. But at this point you have converted the entire earth’s crust into machines or waste piles to work with.
This is also yet another reason that AI is part of the puzzle. Even if failures were rare, there probably are not enough humans available to keep 1e13 robotic systems functioning, if each system occasionally needed a remote worker to log in and repair some fault. There’s also the engineering part of the challenge : these later steps require very complex systems to be designed and operated. If you have human grade AI, and the hardware to run a single human grade entity is just a few kilograms of nano-circuitry (like the actual hardware in your skull), you can create more intelligence to run the system as fast as you replicate everything else.
Standard reference: Nanosystems. In quite amazing detail, though the first couple of chapters online don’t begin to convey it.
There’s lots and lots of physics. All of this discussion has already been done.
While this may be a settled point in your mind, it is not in general a settled point in the mind of your audience. Inasmuch as you’re trying to convince other people of your beliefs, it’s best to meet them where they are, and not ask them to suspend their sense of disbelief in directions that are more or less orthogonal to your primary argument.
MNT is not widespread in the meme pool. Inasmuch as FAI assumes or appears to rely on MNT, it will pay a fitness cost in individuals who do not subscribe to the MNT meme.
Now maybe FAI is particularly convincing to people who already have the MNT meme, and including MNT in possible FAI futures gives it a huge fitness advantage in the “already believes MNT” subpopulation. Maybe the trade-off for FAI of reduced fitness in the meme pool at large (or the computational-materials-scientist meme-pool) is worth it in exchange for increased fitness in the transhumanist meme pool. I don’t know. I certainly haven’t done nearly the work publicizing FAI that you have, and obviously you have some idea of what you’re doing. I’m not trying to argue that it should be taken out, or never used as an example again. I will say that I hope you take this post/argument as weak counter-evidence on the effectiveness of this particular example, and update accordingly.
Eliezer linked to the Drexler book and dissertation and he probably trusts the physics in it. If you claim that the physics of nanotech is much harder than what is described there, then you better engage the technical arguments in the book, one by one, and believably show where the weaknesses lie. That’s how you “unsettle” the settled points. Simply offering a contradictory opinion is not going to cut it, as you are going to lose the status contest.
Given the unfathomably positive reception of the grandparent allow me to quote shminux’s reply for support and emphasis.
The opening post took the stance “but seeing all the physics swept under the rug like that sets off every crackpot alarm I have.”. Eliezer provided a reference to a standard physics resource that explains the physics and provides better arguments than Eliezer could hope to supply (without an unrealistic amount of retraining and detouring from his primary objective.) The response was to sweep the physics under the rug and move to “you have to meet me where I am”. Unsurprisingly, this sets off every crackpot alarm I have.
As an alternative to personally engaging in the technical arguments at the very least he could reply with reference to another authoritative source such as another textbook or several people with white hair and letters before their name. That sort of thing can support a position of “that science is disputed” or, if the hair is sufficiently white and the institutional affiliations particularly prominent it could potentially even support “Drexler is a crackpot too!”. But given that the dissertation in question was for MIT that degree of mainstream contempt seems unlikely.
I will be happy to engage Drexler at length when I get the chance to do so. I have not, in the last three days, managed to buy the book and go through the physics in detail; I hope that failure is not enough to condemn me as not acting in good faith. I made it through the first couple of chapters of the dissertation, but it read like a dissertation, which is to say lots of tables and not much succinct reasoning that I could easily prove or disprove. There seemed to be little point in linking to “expert rebuttals” because presumably these would not be new information, though Richard Smalley is the canonical white-haired Nobel laureate who disagrees strongly with the idea of MNT as Drexler outlines it.
This post was not intended primarily as a discussion on whether MNT was true or not. If people consider that an important discussion, I’ll be happy to participate in it and lend whatever expertise I may or may not have. I’ll be happy to buy Nanosystems and walk us all through as much quantum mechanics as anyone could ever want. This was emphatically not my point however. I don’t have a strong opinion on whether MNT is true. I will freely admit to not having personally done the research necessary to come to a confident conclusion one way or the other. I am confident that it’s controversial. It’s not something one hears mentioned in materials science seminars, it doesn’t win you any grants, you wouldn’t put it in a paper. While it may still be true, I don’t think it’s well-established enough that it’s the sort of truth you can take for granted.
I personally would not, when giving an explanation for some phenomenon, ask you to take for granted without at least a citation the following statement. “The ground state energy of a system of atoms can be determined exactly without knowing anything about the wave function of the system and without knowing the wave functions of the individual electrons.” I would not expect anyone reading that statement to be able to evaluate its truth or falsehood without a considerable diversion of energy. I would anticipate that patient readers would be confused, and some people might give up reading altogether because I was stating as fact things they had no good way of verifying.
However, the Hohenberg-Kohn theorems are demonstrably true, and have been around for 50 years. That doesn’t make them obvious. If I skip a step in a proof or derivation, it doesn’t make the proof wrong, but it is going to make people who care about the math very uncomfortable. When one publishes rigorous technical writing, the goal is precisely to make the inferential gaps as small as possible, to lead your skeptical untrusting readers forcefully to a conclusion, without ever confusing them as to how you got from A to B, or opening the door to other explanations.
Absolutely not, and I think this occasioned a useful discussion. But if you have a physics or chemistry background, I for one would greatly appreciate it if you did so (and the Smalley critique, and perhaps Locklin below) and posted your take. Also you don’t need to buy the book, you should be able to get a copy at any large university library.
I am no expert in the relevant science, but I take the Smalley argument from authority with a grain of salt, for two reasons.
First, according to Wikipedia, Smalley was a creationist, and apparently he endorsed an Intelligent Design book, saying the following:
If he underestimated the ability of evolution to create complex molecular machines, perhaps he did the same about human engineering.
Also, the National Academy of Sciences, in its 2006 report on nanotechnology, discussed Drexler’s ideas and did not take Smalley’s critique to be decisive (not a ringing endorsement either, of course, suggesting further experimental research). Here is a page with the relevant sections.
This critique by Scott Locklin seems mainly to be arguing that Drexler was engaged in premature speculation that was not a useful contribution to science or engineering, and has not borne useful fruit. But he also attacks nuclear fusion, cancer research, and quantum computing (as technology funding target) for premature white elephant status, which seem like good company to be in for speculative future technology.
He says that there may be technologies with similar capabilities to those Drexler envisions eventually, but that Drexler has not contributed to realizing them, and suggests that Drexler made serious physics errors (but isn’t very clear about what they are).
I would be interested in knowing about the technological limits, separately from whether they will be reached anytime soon, and whether Drexler’s contributions were any good for science or engineering.
Okay. I’ll try and do this. I’m mildly qualified; I’m finishing up a Ph.D. in computational materials science. It will take me a little while to make time for it, but it should be fun! Anyone else who is interested in seeing this discussion feel free to encourage me/let me know.
I would love to see a critique that started “On page W of X, Drexler proposes Y, but this won’t work because Z”. Smalley made up a proposal that Drexler didn’t make (“fat fingers”) and critiqued that. If there’s a specific design in Nanosystems that won’t work, that would be very informative.
I would be interested to see this.
I would very much like to see this. Sounds like another discussion-level post would be in order.
Thanks!
Certainly not (perceived as acting in bad faith). Instead, that particular comment was a misstep in the dance of rationality. It was worth correcting with emphasis only because many other people were making it too (via excessive upvoting). As Eliezer noted, there would be a big improvement if you said “oops, but still consider the PR implications”.
Like Carl I would appreciate someone else analysing the physics in Drexler’s dissertation and book thoroughly and giving a brief summary of key findings and key concepts.
For my part what I do take for granted is that DNA based machines can be used to create arbitrarily complex impacts on the environment. The question of precisely how much smaller than DNA based cells it is possible to make machines is a largely incidental concern.
This entire thread is about the PR implications. There’s a reason I titled it “Is MNT putting our best foot forward” and not “Is MNT true?”
I don’t care about MNT. I do care about FAI. I regret deeply that this discussion has become focused on whether or not MNT is true, which is a subject I don’t really care about, and has gotten away from, “Is MNT a good way to talk about FAI” which is a subject I care a lot about.
Also I have some worries about the pattern “X is unsupported! What, you have massive support for X? Well talking about X is still bad publicity, really I’m concerned for how this makes you look in front of other people.” I’ll consider an ‘oops, I retract my previous argument, but...’ followed by that shift, but not without the ‘oops’. Otherwise I do update on X possibly being bad publicity, but not in a being-persuaded way, more of an okay-I’ve-observed-you way.
I don’t consider Drexler’s work to be “massive support” for MNT. I think that MNT is controversial. I think that one shouldn’t introduce controversial material in a discussion unless you absolutely have to for some of the same reasons I think that Nixon being a Quaker and Republican is a bad example.
I honestly wasn’t sure when I posted this whether anyone else here would feel the same way about MNT being non-obvious and controversial. It does seem safe to say that if MNT is controversial on LW, which is overwhelmingly sympathetic to transhumanist ideas, then it’s probably even less popular outside of explicitly transhumanist communities.
Drexler gets the physics right. It’s harder to evaluate the engineering effort needed. Eliezer’s claims about how easy it would be for an FAI to build MNT go well beyond what Drexler has claimed.
I’m fairly sure I know more about MNT than Eliezer (I tried to make a career of it around 1997-2003), and I’m convinced it would take an FAI longer than Eliezer expects unless the FAI has very powerful quantum computers.
Why do you expect this to help? What nanotech computations would a “very powerful quantum computer” accomplish so much faster than a classical computer? Or do you mean something like an “analog” quantum computer, also known as a “quantum simulator”, which solves the Schrodinger equation by simulating the Hamiltonian and its evolution, rather than the “ordinary” digital quantum computer, which speeds up numerical algorithms?
Anything that makes the Schrodinger equation tractable would make me much less confident of my analysis.
Offhand, I would expect analog quantum simulators to come before digital quantum computers, given how they are already naturally everywhere, anyway, just not in a well-controlled way. Sort of like birds were a living proof that “heavier-than-air flying machines” are possible. This year-old Nature review seems to show a number of promising directions.
How did natural selection solve this problem without quantum computers or even intelligence, and why can’t an AI exploit the same regularity even faster?
Natural selection used trial and error. An AI would do that faster and with fewer errors.
Estimating how long a strong AI takes to design molecular nanotechnology requires knowledge of molecular nanotechnology, knowledge of recursive artificial intelligence and knowledge of computation. This is particularly the case since most of the computation required to go from a recursively-self-improving-AI-seed to nanotech is going to be spent on the early levels of self improving, not the nanotech design itself.
The “unless the FAI has very powerful quantum computers” caveat gives a rather strong indication that your appeals to your own authority are less trustworthy with respect to AI and computation than they are about MNT (for reasons alluded to by shminux).
There are some problems for which knowledge of the problem plus knowledge of computation is sufficient to estimate a minimum amount of computation needed. Are you claiming to know that MNT isn’t like that? Or that an AI can create powerful enough computers that that’s irrelevant?
Appeals to authority about AI seem unimpressive, since nobody has demonstrated expertise at creating superhuman AI.
Perhaps my token effort at politeness made me less than completely clear. That wasn’t an appeal to AI authority. That was a rejection of your appeal to your own personal authority based on the degree to which you undermined your credibility on the subject by expressing magical thinking about quantum computation.
You just appealed to your own authority about molecular nano-technology. When can I expect you to announce your product release? (Be consistent!)
Magical thinking? I intended to mainly express uncertainty about it.
I don’t expect appeals to authority to accomplish much here. Maybe it was a mistake for me to mention it at all, but I’m concerned that people here might treat Eliezer as more of an authority on MNT than he deserves. I only claimed to have more authority about MNT than Eliezer. That doesn’t imply much—I’m trying to encourage more doubt about how an AI could take over the world.
Can you provide more detail and maybe give some examples?
From this paper, page 26:
Has Drexler said anything which implies that step 4 would succeed without lots of trial and error?
I’d like to address just the claim here that you could provide instructions to a nanosystem with a speaker. If we assume that the frequency range of the speaker lines up with human hearing, and that our nanosystem is in water, then the smallest possible wavelength we can get from our speaker is on the order of 7 cm:

λ = v / f = (1500 m/s) / (20 kHz) = 7.5 cm

How can you provide instructions to a nanosystem with a signal whose linear dimension is on the order of centimeters? How can you precisely control something when your manipulator is orders of magnitude larger than the thing you’re manipulating?
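To make the arithmetic explicit (taking the speed of sound in water as roughly 1500 m/s, a value that in reality varies with temperature and salinity):

```python
# Rough check of the wavelength estimate above: sound in water at the
# upper limit of human hearing. 1500 m/s is an approximate value for
# the speed of sound in water.
speed_of_sound_water = 1500.0  # m/s
frequency = 20e3               # Hz, ~upper limit of human hearing

wavelength = speed_of_sound_water / frequency
print(f"wavelength = {wavelength * 100:.1f} cm")  # wavelength = 7.5 cm
```

Pushing the frequency into the MHz range (medical ultrasound) shrinks the wavelength to sub-millimeter, but that is still enormous compared to a nanoscale device.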
You can get microphones much smaller than 7 cm, and they can detect frequencies way lower than 20 kHz. There’s no rule saying you need a large detector to pick up a signal with a large wavelength.
I believe the original comment isn’t about the receiver, but about the emitter—that if you use audible-range sound or even ultrasound, the spatial resolution of the signal will be impossibly coarse compared to a nanobot. Each nanobot will be able to get the signal, but you won’t be able to communicate only with nanobots in a specific part of the body.
This might not be a fatal objection, since you could imagine some sort of protocol with unique addresses or whatnot, but it’s an objection.
This isn’t about bots, it’s about a little tiny factory building your second-stage materials.
You can get the effect of a huge telescope lens with an array of smaller telescopes. Could you get the same effect for sound?
Sure, if you can have all your pieces coordinate and stay coordinated with each other. If you do that, you still have a communication problem, just a different one.
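For what it’s worth, acoustic phased arrays (medical ultrasound does this routinely) can focus sound, but the focal spot is still diffraction-limited to roughly λ·F/D, where F is the focal depth and D the aperture. A rough sketch with purely illustrative numbers (all of them assumed):

```python
# Diffraction-limited focal spot of an acoustic array: spot ~ lambda * F / D.
# All numbers below are illustrative assumptions, not a real design.
speed_of_sound_water = 1500.0   # m/s, approximate speed of sound in water
frequency = 20e3                # Hz, upper limit of the audible range
wavelength = speed_of_sound_water / frequency   # 7.5 cm

focal_depth = 0.10   # m, assumed distance from array to target
aperture = 0.30      # m, assumed size of the coordinated array

spot_size = wavelength * focal_depth / aperture
print(f"focal spot ~ {spot_size * 100:.1f} cm")  # ~2.5 cm
```

So even a perfectly coordinated audible-range array addresses centimeter-scale regions at best; beating that requires going to much higher frequencies, which brings its own attenuation problems.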
I think the reason AI and nanotech often go together in discussions of the future is summed up in this quote by John Cramer: “Nanotechnology will reduce any manufacturing problem, from constructing a vaccine that cures the common cold to fabricating a starship from the elements contained in sea water, to what is essentially a software problem.”
The physicist and quant Scott Locklin doesn’t think highly of Nanosystems. You can find his 2010 blog post about it easily enough.
Drexler’s doctorate seems fishy to me as well. One, he got it from MIT’s Media Lab, and not from a real department of science or engineering.
Two, the gadgeteers and artists who do stuff at the Media Lab have to produce real things which they can get to work. Drexler hasn’t done that, and that adds further to my sense of scandal about his doctorate.
And three, the Wikipedia page for the Media Lab lists its notable associates, their achievements and their publications. It doesn’t mention Drexler or a “nano” anything, like the Media Lab people who have a say over the page’s content feel embarrassed by the Drexler episode now.
You’re Mark Plus, right?
http://www.acceleratingfuture.com/michael/blog/2010/09/scott-locklin-on-nanotechnology-and-drexler/
Verdict: Locklin is “Flamebait.”
I checked the original article. I agree. There’s not much sign Locklin actually read Nanosystems, AFAICT, and at this point I’m not much inclined to give benefit of the doubt.
Your argument is extremely human-parochial. You seem to be thinking of AIs as potential supervillains who want to “rule the world,” (where ruling the world = controlling humans.) If you think that an AI would care about controlling humans, you are assuming that the AI would be very human-like. In the space of possible mind-designs, very few AIs care about humans as anything but raw resources.
In the space of possible mind-designs, your mind (and every human mind) is an extreme specialist in manipulating humans. So of course, to you manipulating humans seems vastly easier and more useful than building MNT or macro-sized robots, or whatever.
An AGI cares about not being killed by humans.
Corn manipulates humans into killing parasites that might damage the corn in a variety of ways. An entity doesn’t need to be smart to be engaged in manipulating humans.
As long as humans have the kind of power over our world that they have at the moment, an AGI will either be skilled in dealing with humans, or humans will shut it down if there seems to be a danger of the AGI amassing power while not caring about humans.
I’m not assuming that the AI has a large final preference for controlling humans. I am stressing how the AI interacts with humans because, as a human, that’s of particular concern to me. Access to human resources may also be a useful instrumental goal for a “young” AI, as human beings control a fairly large amount of resources and gaining access to them may be the easiest route to power for an AI. My understanding is that in the context of FAI, we’re discussing AI in terms of what it means for humans, so that’s where I’m placing the emphasis. The discussion of how the AI gains resources/global control is valid even if the AI’s end game is tiling the universe in paperclips.
The question of whether an AI is likely to have more difficulty understanding humans or quantum mechanics is interesting. As a possible counterpoint, I would say that an AI programmed by human beings is likely to be close to human style thought in the space of all possible minds, so the vastness of mind space is perhaps not totally relevant. I’m not clear as to whether that’s a particularly good counterpoint.
I don’t have a problem with the AI building an army of macrosize robots, or taking over the internet, or whatever. I don’t think human society is well-designed, or is even capable of being well-designed, with respect to significantly slowing down an AI trying to convert us all into resources. Indeed, it seems to me that any number of possible paths require fewer assumptions and less computational time than MNT. The essence of my complaint is that it seems like of the many possible paths to power for an AI, the one that gets stressed in FAI literature is on the less likely end of the spectrum, and I’m really confused as to why that choice has been made.
Might be easier for a program as well, if one person can write a chat bot that hypnotizes people :-)
One person.
Also, an AI needs to keep itself from being shut down. Also, an AI needs humans as its manipulators until it can have its own manipulators.
I’m commenting a few days after the main flurry of discussion and just wanted to raise a concern about how there seems to be a conflation in the OP and in many of the comments between (1) effective political advocacy among ignorant people who will stick with the results that fall out of the absurdity heuristic even when it gives false results and (2) truth seeking analysis based on detailed mechanistic considerations of how the world is likely to work.
Consider the 2x2 grid where, on one axis, we’re working in either an epistemically unhygienic advocacy frame where its OK to say false things that get people to support the right conclusion or policy (versus a truth seeking frame where you grind from the facts to the conclusion with high quality reasoning processes at each stage for the sake of figuring stuff out from scratch) and on the second axis Leplen’s dismissal of MNT is coherently founded and on the right track (versus it just being a misfiring absurdity heuristic).
I think in this forum it can be generally assumed that “FAI is important” as the background conclusion that is also a message that it is probably beneficial to advocate on behalf of.
Leplen’s claim here is a claim about Leplen’s historically contingent reasoning processes rather than about the object level workability of MNT, and it is raised as though Leplen is a fairly normal person whose historically likely reaction to MNT is common enough to be indicative of how it will play with many other people. So the part of the 2x2 grid it is from is firmly “advocacy rather than truth”, and mostly assuming “Leplen’s reaction is justified”. I think it is worth spelling out what it would look like to explore the other three boxes in the 2x2 grid.
If we retain the FAI-promoting advocacy perspective but imagine that Leplen is wrong because “MNT magic” is actually something future scientists or an AGI could pull together and deploy, then the substantive cost to the world might be that courses of action that are important if MNT is a real concern may not be well addressed by the group of people who may have been mobilized by a “just FAI, not MNT” advocacy. If basically the same AGI-safety strategy is appropriate whether or not an AGI would head towards MNT as a lower bound on the speed and power of the weapons it could invent, then dropping MNT from the advocacy can’t really harm anything. If the appropriate policies are different enough that lots of people convinced of “FAI without MNT” would object to “FAI with MNT” protection measures, then dropping MNT from the advocacy could be net harmful to the world.
If we retain the idea that Leplen’s dismissal of MNT is coherent and justified, but flip over to a truth-seeking frame (while retaining awareness of a background belief by many old time LWers that MNT is probably important to think about), then the arguments offered to help actually change people’s minds for coherent reasons seem lacking. From a truth seeking perspective it doesn’t matter what turns people on or off if their opinions aren’t themselves important indicators of how the world actually is.

The only formal credential offered is in materials science, and this is raised from within an activist advocacy frame where Leplen admits that motivated cognition could account for their attitude with respect to MNT, out of defensiveness and a desire to not have skills become obsolete. Lots of people don’t want to become obsolete, so this is useful evidence for figuring out how to convince similarly fearful people of a conclusion about the importance of FAI by dropping other things that might make FAI advocacy harder. But the claim that “MNT is unimportant based on object level science considerations” will be mostly unmoved by the advocacy level arguments here if someone already has chemistry experience, and has read Nanosystems, and still thinks MNT matters. Something else would need to be offered than hand waving and a report about emotional antibodies to a certain topic.

So presuming that Leplen’s dismissal of MNT is on track, and that many LWers think MNT is important, it seems like there’s an education gap, where the LW mainstream could be significantly helped by learning the object level reasoning that justifies Leplen’s dismissal of MNT. Like where (presuming that it went off the rails somewhere) did Nanosystems go off the rails?
The fourth and final box of the 2x2 grid is for wondering what things would look like if we were in a truth seeking and communal learning mode (not worried about advocacy among random people) and Leplen was wrong to dismiss MNT. In this mode the admixture of advocacy and truth while taking Leplen seriously seems pretty bad, because the very local educational process this week on this website would be going awry. It is understandable that Leplen’s reaction is relevant to one of LW’s central advocacy issues and Leplen seems friendly to that project… and yet from the perspective of an attempt to build community knowledge in the direction of taking serious things seriously and believing true things for good reasons while disbelieving false things when the evidence pushes that way… the conflation is mildly disturbing.
This is a bad argument. Like it doesn’t even take into account the distinction between bootstrapping from scratch to a single working general assembler versus how it would work assuming the key atoms could be put into the right places once (like whether and how expensively it could build copies of itself). The “bootstrap difficulty” and “mature scaleout” questions are different questions, and our discussion seems to be papering over the distinctions. The badness of this argument was gently pointed out by drethelin, but somehow not in a way that was highly upvoted, I suspect because it didn’t take the (probably?) praiseworthy advocacy concerns into account.
To be clear, I’m friendly to the idea that MNT might not be physically possible, or if possible it might not be efficient. I’m not a huge expert here at all and would like to be better educated on the subject. And I’m friendly to the idea of designing AGI advocacy messages that gain traction and motivate people to do things that actually improve the world. I’m just trying to point out that mixing both of these concerns into the same rhetorical ball, seems to do a disservice to both...
Which is pretty ironic, considering that “mixing FAI and MNT together seems politically problematic” seems to be the general claim of the article. Mostly I guess I’m just trying to say that this article is even more complicated because now instead of sometimes doping the FAI discussions with MNT, we’re fully admixing FAI and MNT and political advocacy.
It is possible to have expert experience in chemistry and to find MNT preposterous for reasons derived from that experience. In fact, it’s a common reaction; not totally universal, but very common. And the second quote from leplen sums up why, quite nicely and accurately. Even if one trusts the calculations in Nanosystems regarding the stability of the various structures on display there, they will still look like complete fantasy to someone used to ordinary methods of chemical synthesis, which really do resemble “shaking a large bin of lego in a particular way while blindfolded”!
Nanosystems itself won’t do much to convince someone who thinks that assembly is the main barrier to the existence of such structures. Maybe subsequent papers by Merkle and Freitas would help a little. They argue that you could store HCCH in the interior of nanotubes as a supply of carbons, which can then be extracted, manipulated, and put into place—if you work with great delicacy and precision…
But it is a highly nontrivial assertion, that positional control of small groups of atoms, such as one sees in enzymatic reactions, can be extended so far as to allow the synthesis of diamond through atom-stacking by nanomechanisms. Chemists have a right to be skeptical about that, and if they run across an intellectual community where people blithely talk of an AI ordering a few enzymes in the mail and then quickly bootstrapping its way to possession of a world-eating nanobot army, then they really do have a reason to think that there might be crackpots thereabouts; or, more charitably, people who don’t know the difference between science fiction and reality.
Isn’t this just the argument from personal incredulity? We have an existence proof for molecular nanotechnology—namely, nature did it by making us. We can even look at its solution while constructing ours. The cost of pushing atoms up energy gradients might make some diamondoid structures expensive—but it didn’t stop nature from building atomically-precise organisms. Nobody says we have to make molecular nanotechnology while blindfolded! So: your objections don’t seem to make very much sense.
The blindfold refers to our ability to manipulate atoms in complicated structures only through several layers of indirection.
Technically the blindfold was intended to refer to the fact that you can’t make measurements on the system while you’re shaking the box because your measuring device will tend to perturb the atoms you’re manipulating.
The walls of the box that you’re using to push the legos around were intended to refer to our ability to manipulate atoms only with clumsy tools and several layers of indirection, but we’re basically on the same page.
This is also wrong. The actual proposals for MNT involve creating a system that is very stable, so you can measure it safely. The actual machinery is a bunch of parts that are as strong as they can possibly be made (this is why the usual proposals involve covalent bonded carbon aka diamond) so they are stable and you can poke them with a probe. You keep the box as cold as practical.
It’s true that even if you set everything up perfectly, there are some events that can’t be observed directly, such as bonding and rearrangements that could destroy the machine. In addition, practical MNT systems would be 3d mazes of machinery stacked on top of each other, so it would be very difficult to diagnose failures. To summarize : in a world with working MNT, there’s still lots of work that has to be done.
Building molecular nanotechnology seems to be nothing like being able to make arbitrary lego structures by shaking a large bin of lego in a particular way while blindfolded. Drexler proposes we make nano-scale structures in factories made of other nano-scale components. That’s a far more sensible picture.
Nothing like it? Map the atoms to individual lego pieces; their configuration relative to each other (i.e. lining up the pegs and the holes) was intended to capture the directionality of covalent bonds. We capture forces and torques well, since smaller legos tend to be easier to move but harder to separate than larger legos. The shaking represents acting on the system via some thermodynamic force. Gravity represents a tendency of things to settle into some local ground state that your shaking will have to push them away from. I think it does a pretty good job capturing some of the problems with entropy and exerted forces producing random thermal vibrations, since those things are true at all length scales. The blindfold is because you aren’t Laplace’s demon, and you can’t really measure individual chemical reactions while they’re happening.
If anything, the lego system has too few degrees of freedom, and doesn’t capture the massiveness of the problem you’re dealing with, because we can’t imagine a mole of lego pieces.
I try not to just throw out analogies willy-nilly. I really think that the problem of MNT is the problem of keeping track of an enormous number of pieces and interactions, and pushing them in very careful ways. I think that trying to shake a box of lego is a very reasonable human-friendly approximation of what’s going on at the nanoscale. I think my example doesn’t do a good job describing the varying strengths or types of molecular bonds, nor does it capture bond stretching or deformation in a meaningful way, but on the whole I think that saying it’s nothing like the problem of MNT is a bit too strong a statement.
The way biological nanotechnology (aka the body you are using to read this) solves this problem is it bonds the molecule being “worked on” to a larger, more stable molecule. This means instead of whole box of legos shaking around everywhere, as you put it, it’s a single lego shaking around bonded to a tool (the tool is composed of more legos, true, but it’s made of a LOT of legos connected in a way that makes it fairly stable). The tool is able to grab the other lego you want to stick to the first one, and is able to press the two together in a way that makes the bonding reaction have a low energetic barrier. The tool is shaped such that other side-reactions won’t “fit” very easily.
Anyways, a series of these reactions, and eventually you have the final product, a nice finished assembly that is glued together pretty strongly. In the final step you break the final product loose from the tool, analogous to ejecting a cast product from a mold. Check it out : http://en.wikipedia.org/wiki/Pyruvate_dehydrogenase
Note a key difference here between biological nanotech (life) and the way you described it in the OP. You need a specific toolset to create a specific final product. You CANNOT make any old molecule. However, you can build these tools from peptide chains, so if you did want another molecule you might be able to code up a new set of tools to make it. (and possibly build those tools using the tools you already have)
Another key factor here is that the machine that does this would operate inside an alien environment compared to existing life—it would operate in a clean vacuum, possibly at low temperatures, and would use extremely stiff subunits made of covalently bonded silicon or carbon. The idea here is to make your “lego” analogy manageable. All the “legos” in the box are glued tightly to one another (low temperature, strong covalent bonds) except for the ones you are actually playing with. No extraneous legos are allowed to enter the box (vacuum chamber).
If you want to bond a blue lego to a red lego, you force the two together in a way that controls which way they are oriented during the bonding. Check it out : http://www.youtube.com/watch?v=mY5192g1gQg
Current organic chemical synthesis DOES operate as a box of shaking legos, and this is exactly why it is very difficult to get lego models that come out without the pieces mis-bonded. http://en.wikipedia.org/wiki/Thalidomide
As for your “Schrodinger Equations are impractical to compute”: what this means is that the Lego Engineers (sorry, nanotech engineers) of the future will not be able to solve any problem in a computer alone; they’ll have to build prototypes and test them the hard way, just as it is today.
Also, this is one place where AI comes in. The universe doesn’t have any trouble modeling the energetics of a large network of atoms. If we have trouble doing the same, even using gigantic computers made of many many of these same atoms, then maybe the problem is we are doing it a hugely inefficient way. An entity smarter than humans might find a way to re-formulate the math for many orders of magnitude more efficient calculations, or it might find a way to build a computer that more efficiently uses the atoms it is composed of.
If you have to do this, then the threat of nanotech looks a lot smaller. Replicators that need a nearly perfect vacuum aren’t much of a threat.
This sounds very close to a default assumption that these processes are genuinely easy to not just compute, but to actually work out what solutions one wants. Answering “how will this protein most likely fold?” is computationally much easier (as far as we can tell) than answering “what protein will fold like this?” It may well be that these are substantially computationally easier than we currently think. Heck, it could be that P=NP, or it could be that even with P != NP that there’s still some extremely slow growing algorithm that solves NP complete problems. But these don’t seem like likely scenarios unless one has some evidence for them.
Got a reference for that? It’s not obvious to me (CS background, not bio).
What if you have an algorithm that attempts to solve the “how will this protein most likely fold?” problem, but is only tractable on 1% of possible inputs, and just gives up on the other 99%? As long as the 1% contains enough interesting structures, it’ll still work as a subroutine for the “what protein will fold like this?” problem. The search algorithm just has to avoid the proteins that it doesn’t know how to evaluate. That’s how human engineers work, anyway: “what does this pile of spaghetti code do?” is uncomputable in the worst case, but that doesn’t stop programmers from solving “write a program that does X”.
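The scheme described above can be sketched as a toy program (all names and the scoring rule here are hypothetical, purely for illustration): a forward evaluator that is only tractable on a small slice of inputs and refuses the rest, wrapped in an inverse search that simply skips whatever the evaluator gives up on.

```python
# Toy sketch of "inverse design via a partially tractable forward solver".
# forward_evaluate stands in for a folding predictor that only handles a
# small fraction of inputs; inverse_search avoids the inputs it can't score.

def forward_evaluate(candidate):
    """Pretend forward problem: tractable only on even-valued candidates;
    returns None (gives up) on everything else."""
    if candidate % 2 != 0:
        return None                    # intractable input: refuse to evaluate
    return (candidate - 40) ** 2       # score: distance from the target "shape"

def inverse_search(candidates):
    """Search for the best design, silently skipping candidates the
    forward evaluator can't handle."""
    best, best_score = None, float("inf")
    for c in candidates:
        score = forward_evaluate(c)
        if score is None:
            continue                   # avoid inputs we don't know how to evaluate
        if score < best_score:
            best, best_score = c, score
    return best

print(inverse_search(range(100)))      # -> 40
```

As long as the tractable slice contains a good enough design, the search never needs to touch the inputs the evaluator can’t handle.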
Sure, see for example here which discusses some of the issues involved. Although your essential point may still have merit, because it is likely that many of the proteins we would want will have much more restricted shapes than those in general problem. Also, I don’t know much about what work has been done in the last few years, so it is possible that the state of the art has changed substantially.
The idea is to have a vacuum inside the machinery, a macroscopic nanofactory can still exist in an atmosphere.
Sure, but a lot of the hypothetical nanotech disasters require nanotech devices that are themselves very small (e.g. the grey goo scenarios). If one requires a macroscopic object to keep a stable vacuum, then the set of threats goes down by a lot. Obviously some threats are still present (such as the possibility that almost anyone will be able to refine uranium), but many of them go away, and many of the obvious scenarios connected to AI would then look less likely.
I don’t know… I think ‘grey goo’ scenarios would still work even if the individual goolets are insect-sized.
This is unreasonably accusatory. I’m pretty sure MNT is added to the discussion because people here such as Eliezer and Anissimov and Vassar believe it to be both possible and a likely thing for AI to do.
Isn’t this the argument creationists use against evolution? But more seriously, nature does nano-assembly constantly and with pretty remarkable precision, in ways we have yet to fully understand or control. This means that there’s at the very least that much to learn about MNT that we’re simply “not smart enough” to understand yet. Consider fields like transfection, where you can buy some reagents and cells from Sigma or whoever and make them create your own custom proteins. This is far, far in advance of what we could do 100 years ago, but is arguably only a matter of being “smarter” and/or knowing more rather than anything else. Calcium phosphate transfection doesn’t even use novel chemicals, and yet it was only discovered in 1973.
Nature does nano-assembly, but it isn’t arbitrary nano-assembly.
My example of a very hard nano-assembly problem is a ham sandwich, with the hardest part being the lettuce. It’s possible that the easiest way to make a lettuce leaf—they still have live cells—is to grow a head of lettuce.
Maybe the right question (ignoring where MNT fits with AI) is to look at what parts of MNT looks feasible at present levels of knowledge.
Pointing out a possible mental bias isn’t accusatory.
I read that phrase as implying MNT was consciously added to help convince others about FAI, not that it was an unconscious bias eg Eliezer had.
This is precisely what I meant. In some examples the line of reasoning “AI->MNT->we’re all dead if it’s not friendly” is specifically prefaced with the discussion that any detailed example is inherently less plausible, but adding the details is supposed to make it feel more believable. My whole argument is that I think this specific detail will backfire in the “making it feel more believable” department for someone who does not already believe in MNT and other transhumanist memes.
Whether or not MNT is a likely tool of AI (I think it is), IIRC it is usually used as a lower bound on what an AI can do. This answers leplen’s objection that MNT is a burdensome detail—saying “AI could, for example, use MNT to take over the world”, is only as burdensome as the claim that MNT or some other similarly powerful technologies are possible.
No, MNT is part of the discussion because it is taken for granted, along with cryonics, parallel quantum worlds, Dyson spheres, and various less spectacular ideas. You may want to see analogous complaints that I have previously made.
Continuous in the sense of, like, continuous energy levels? Because if so, wow.
Here’s my guess:
“Continuous” is a reference to the wave function as described by current laws of physics.
Eliezer is an “infinite set atheist”, which among other things rules out the possibility of an actually continuous fabric of the universe.
As I’ve already pointed out to another infinite set atheist, you could get the appearance of a continuous wavefunction without actually requiring infinite computing power to simulate it. All you need to do is make the simulation lazy—add more trailing digits in a just-in-time fashion.
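The just-in-time idea above can be made concrete with a small sketch (the class and names are hypothetical, chosen only to illustrate lazy precision): a “real number” whose trailing digits are generated only when something inspects them, so storage grows with inspection depth rather than being infinite up front.

```python
# Toy sketch of a "lazy" real number: an apparently continuous value whose
# trailing digits are generated just-in-time, so the simulation only ever
# stores as many digits as have actually been inspected.

import random

class LazyReal:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)   # deterministic source of new digits
        self.digits = []                 # digits generated so far

    def digit(self, n):
        """Return the n-th decimal digit, generating new trailing digits
        on demand; already-generated digits never change."""
        while len(self.digits) <= n:
            self.digits.append(self.rng.randrange(10))
        return self.digits[n]

x = LazyReal(seed=42)
x.digit(3)            # forces generation of digits 0..3 only
print(len(x.digits))  # -> 4: storage grows only with inspection depth
```

To any observer inside such a simulation the value looks arbitrarily precise, since every digit they ask for exists by the time they look; the simulator never pays for the digits nobody queries.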
Whether or not that counts as complicating the rules for the purpose of Solomonoff induction is… hard to say.
Furthermore, a “continuous” function could very well contain a finite amount of information, provided its frequency range is limited. But then, it wouldn’t be “actually” continuous.
I just didn’t want to complicate things by mentioning Shannon.
That would be reasonable, but it’s not clear to me what “their own view” about that would look like. My impression is that most physicists see the universe as (at least functionally) continuous, with a few people working on determining upper bounds for how small the discrete spatial elements of the universe could be, and getting results like “well, any cells would be as much smaller than our scale as our scale is from the total size of the observable universe.”
I don’t have an issue bringing up MNT in these discussions, because our goal is to convince people that incautiously designed machine intelligence is a problem, and a major failure mode for people is that they say really stupid things like “well, the machine won’t be able to do anything on its own because it’s just a computer; it’ll need humanity, therefore it’ll never kill us all.” Even if MNT is impossible, that’s still true, but bringing up MNT provides people with an obvious intuitive path to the apocalypse. It isn’t guaranteed to happen, but it’s also not unlikely, and it’s a powerful educational tool for showing people the sorts of things that strong AI may be capable of.
This is not a great argument, given that it works equally well if you replace MNT with God/Devil in the above.
That’s… not a strong criticism. There are compelling reasons not to believe that God is going to be a major force in steering the direction the future takes. The exact opposite is true for MNT—I’d bet at better-than-even odds that MNT will be a major factor in how things play out basically no matter what happens.
All we’re doing is providing people with a plausible scenario that contradicts flawed intuitions that they might have, in an effort to get them to revisit those intuitions and reconsider them. There’s nothing wrong with that. Would we need to do it if people were rational agents? No—but, as you may be aware, we definitely don’t live in that universe.
There is no need to use known bad arguments when there are so many good ones.
Of course there is. For starters, most of the good arguments are much more difficult to concisely explain, or invite more arguments from flawed intuitions. Remember, we’re not trying to feel smug in our rational superiority here; we’re trying to save the world.
if your bad argument gets refuted, you lose whatever credibility you may have had.
It isn’t the sort of bad argument that gets refuted. The best someone can do is point out that there’s no guarantee that MNT is possible. In which case, the response is ‘Are you prepared to bet the human species on that? Besides, it doesn’t actually matter, because [insert more sophisticated argument about optimization power here].’ It doesn’t hurt you, and with the overwhelming majority of semi-literate audiences, it helps.
The main problem here is that both you and the people you’re complaining about confuse early nanotechnology roaming free in the environment (perhaps about as capable as a living cell, and probably quite similar to one in other ways) with a much more advanced, energy-intensive nanotechnology operating in a controlled environment. This is further confounded by the AI likely being able to use large amounts of the quickly reproducing early nanotech, plus existing infrastructure, to construct said advanced nanotech in a few hours.
That is, if it doesn’t find a way to program the nearest spinning harddrive to write down working femtotech within a millisecond.
What is femtotech? A femtometer is the size of an individual proton. You have just implied that an AI could conceivably use a spinning disk harddrive to perform nuclear fission and assemble the resulting nucleons into some sort of technology within a thousandth of a second.
Are you sure that you believe that’s possible?
That was about the mental image I had in mind, yea.
And as Baughn said, I don’t think it’s possible, but I’m not entirely certain that it isn’t. More importantly, it doesn’t seem that implausible that at least one of the thousands of other ideas (ones about as overpowered and impossible-sounding, which we haven’t the faintest hope of thinking of ourselves) is possible. And one is all the AI needs.
Pointing out that a very low probability argument is not proved to be impossible and is thus worth considering is roughly equivalent to pointing out that someone wins the lottery. Just as I wouldn’t listen to a financial adviser who at every meeting pointed out that I might soon win the lottery, it’s difficult to take seriously people warning me of the risks of things that I judge to be impossible. If you have significantly better advice than lottery tickets, I’m happy to hear it, but the argument that surely I could buy lots and lots of lottery tickets, and one of them has to win is not particularly convincing.
You are confusing probability with difficulty; I’m pretty certain that in some sense, random read-write patterns would eventually cause it to happen with some inconceivably low probability. The question is what’d be required to find the right pattern. Will quantum mechanics put a stop to it before it propagates more than a few atoms? Is there a pattern that’d do it, but finding that pattern would break thermodynamics? Is there a pattern that could be found, but that’d require far more computing power than could ever be built in this universe utilized maximally? Maybe it’s technically within reach but would require a matrioshka brain. Or maybe, just maybe, almost certainly not, the kind of supercomputer the AI might get its hands on will be able to do it.
But yeah, the probability that this particular strategy would be practical is extremely small. That was not my point. My point is that the rough reference class of absurdly overpowered implausibilities, when you integrate over all the myriad different things in it, ends up with a decent chunk of probability put together.
The ‘femtotech from HDD’ thing is essentially shorthand for some kind of black swan. Considering how powerful mere human-level intelligence is, superintelligence seems certain to find something overpowered somewhere, even if it’s just hacking human minds.
FWIW, while my estimate of femtotech-through-weird-bit-patterns is effectively zero, my estimate of femtotech through rewriting the disk firmware and getting it to construct magnetic fields in no way corresponding to normal disk usage is… well, still near-zero, but several orders of magnitude higher.
If you didn’t consider overwriting the firmware as an immediately obvious option, you may be merely human. An AI certainly would, and other options I wouldn’t think of. :-)
Once again, someone else expressed my thoughts way better than I ever could.
I don’t think anyone would claim that. The claim is that we can’t be sufficiently certain it’s impossible.
I am sufficiently certain it’s impossible. I don’t care how intelligent something is, physical law wins. You can’t trick the conservation of energy. You can’t run nuclear reactions on your hard-drive, no matter how you spin it.
I would rate the possibility of unicorns orders of magnitude higher than the possibility of assembling femtotech using my hard disk. It is more probable that the latter is impossible than that I am actually composing this post.
I’ve read at least one science fiction story predicated on the idea that the A.I., within a few moments of waking up, discovers a heretofore unknown principle of physics and somehow uses it as its gateway to freedom, in one case by actually controlling the components of its hardware to manipulate the newly discovered tachyon field or whatever.
Whether or not this scenario as stated is plausible is less important than the underlying question: How much of basic physics do you think humans have already figured out? If your answer is that we’ve already discovered 95% of the true laws of physics, then I can see how you would be skeptical. However, if you’re wrong and there is actually something fundamental that we’re missing because we’re just too stupid, then you can be assured that an arbitrarily powerful A.I. would not miss it, and would figure out how to exploit it.
We’re pretty good at physics. The g-factor for an electron is 2.0023193043622(15). That number is predicted by theory and measured experimentally, and both give that exact same result. The parentheses around the last two digits denote that we’re not totally sure those last two numbers are a one and a five, due to experimental error. There are very few other human endeavors where we have 12 or 13 decimal places worth of accuracy. While there’s still a lot of interesting consequences to work out, and people are still working on getting quantum mechanics and general relativity to talk to each other, any new quantum physics is going to have to be hiding somewhere past the 15th decimal point.
No, they are the standard deviation on the previous digits, i.e. we’re 68% sure that the g-factor is between 2.0023193043607 and 2.0023193043637.
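For anyone unfamiliar with the concise notation being discussed, here is a small sketch (the helper function is hypothetical, written only to illustrate the convention): the parenthesized digits are the one-standard-deviation uncertainty expressed in units of the last quoted decimal place.

```python
# Sketch: expanding the compact "(15)" uncertainty notation into an
# explicit value and one-standard-deviation width.

def parse_uncertainty(s):
    """Parse e.g. '2.0023193043622(15)' into (value, sigma), where the
    parenthesized digits give the standard deviation in units of the
    last quoted decimal place."""
    mantissa, unc = s.rstrip(")").split("(")
    decimals = len(mantissa.split(".")[1])   # number of quoted decimal places
    value = float(mantissa)
    sigma = int(unc) * 10 ** (-decimals)     # 15 units of the 13th decimal
    return value, sigma

value, sigma = parse_uncertainty("2.0023193043622(15)")
print(value - sigma, value + sigma)
# one-sigma interval: roughly 2.0023193043607 to 2.0023193043637
```

That interval matches the one quoted above: the “(15)” spans 1.5e-12 on either side of the central value, not an ambiguity in which digits were measured.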
Prime Intellect, right?
That particular story was made somewhat more plausible because the chips were already based on a newly-discovered, ill-understood physical principle that contradicted normal quantum mechanics. It’s pretty likely humanity would have made the same discoveries, the AI just made them faster.
That’s the one.
As far as “missing physics” goes, I still feel that it’s a tad hubristic to assert that we’ve got everything nailed down just because we can measure the electron’s g-factor very precisely. There could always be unknown unknowns, phenomena which we haven’t seen before because we haven’t observed the conditions under which they would arise. There could simply be regularities in our observations which we don’t detect, like how both Newton’s laws and relativity are obvious-in-hindsight but required genius intellects to be first observed.