Also, sorry about the length of this reply. As the adage goes: “If I had more time, I would have written a shorter letter.”
From my perspective, you simply seem very optimistic about what kind of data can be extracted from unspecific measurements.
That seems to be one of the relevant differences between us. Although I don’t think it is the only difference that causes us to see things differently.
Other differences (I guess some of these overlap):
It seems I have wider error-bars than you on the question we are discussing now. You seem more comfortable treating the availability heuristic (whether or not you can think of approaches by which something could be done) as if it were conclusive evidence.
Compared to me, it seems that you see experimentation as more inseparably linked with needing to build extensive infrastructure / having access to labs, and spending lots of serial time (with much back-and-forth).
You seem more pessimistic about the impressiveness/reliability of engineering that can be achieved by a superintelligence that lacks knowledge/data about lots of stuff.
The probability of having a single plan work, and having one of several plans (carried out in parallel) work, seems to be more linked in your mind than mine.
You seem more dismissive than me of the possibility that conclusions could be reached from first-principles thinking (about how universes might work).
I seem to be more optimistic about approaches to thinking that are akin to (a more efficient version of) “think of lots of ways the universe might work, run Monte Carlo simulations of how those conjectures would affect the probability of lots of aspects of lots of different observations, and take notice if some theories about the universe seem unusually consistent with the data we see”.
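For concreteness, here is a minimal sketch in Python of the kind of loop I have in mind (the candidate “rules” and the observations are invented toy examples, not real physical models): propose candidate rules, Monte Carlo-simulate what each would produce, and notice which rule makes the actual data unusually likely.

```python
import random

random.seed(0)

# Toy "conjectures about how the universe works": each is a rule mapping
# time to the position of a falling object. Both rules are invented here.
conjectures = {
    "constant-velocity fall": lambda t: 4.9 + 2.0 * t,
    "quadratic (Newtonian) fall": lambda t: 4.9 * t ** 2,
}

# Invented noisy observations: (time, observed position) pairs.
observed = [(0.1, 0.05), (0.2, 0.20), (0.3, 0.44)]

def likelihood(rule, data, noise=0.05, n_samples=2000):
    """Estimate P(data | rule) by Monte Carlo: sample noisy predictions
    and count how often they land near each actual observation."""
    score = 1.0
    for t, x in data:
        hits = sum(
            abs(rule(t) + random.gauss(0, noise) - x) < noise
            for _ in range(n_samples)
        )
        score *= max(hits / n_samples, 1e-9)  # floor to avoid zeroing out
    return score

scores = {name: likelihood(rule, observed) for name, rule in conjectures.items()}
best = max(scores, key=scores.get)
```

The point is not the toy physics but the shape of the procedure: the data never has to come from a purpose-built experiment; any observation whose probability differs across conjectures carries Bayesian evidence.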
I wonder if you maybe think of computability in a different way from me. Like, you may think that it’s computationally intractable to predict the properties of complex molecules based on knowledge of the standard model / quantum physics. And my perspective would be that this is extremely contingent on the molecule, what the AI needs to know about it, etc—and that an AGI, unlike us, isn’t forced to approach this sort of thing in an extremely crude manner.
The AI only needs to find one approach that works (from an extremely vast space of possible designs/approaches). I suspect you of having fewer qualms than me about playing fast and loose with the distinction between “an AI will often/mostly be prevented from doing x due to y” and “an AI will always be prevented from doing x due to y”.
It’s unclear if you share my perspective about how it’s an extremely important factor that an AGI could be much better than us at doing reasoning where it has a low error-rate (in terms of logical flaws in reasoning-steps, etc).
From my perspective, I don’t see how your reasoning is qualitatively distinct from saying in the 1500s: “We will for sure never be able to know what the sun is made out of, since we won’t be able to travel there and take samples.”
Even if we didn’t have e.g. the standard model, my perspective would still be roughly what it is (with some adjustments to credences, but not qualitatively so). So to me, us having the standard model is “icing on the cake”.
Eliezer says “A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis (...)”. I might add more qualifiers (replacing “would” with “might”, etc). I think I have wider error-bars than Eliezer, but similar intuitions when it comes to this kind of thing.
Speaking of intuitions, one question that maybe gets at deeper intuitions is “could AGIs find out how to play theoretically perfect chess / solve the game of chess?”. At 5:1 odds, this is a claim that I myself would bet neither for nor against (I wouldn’t bet large sums at 1:1 odds either). Whereas I think people of a certain mindset will think “that is computationally intractable [when using the crude methods I have in mind]”, and leave it at that.
As to my credences that a superintelligence could “oneshot” nanobots[1] (without being able to design and run experiments prior to designing this plan): I would bet neither “yes” nor “no” at 1:1 odds (but if I had to bet, I would bet “yes”).
Upon seeing three frames of a falling apple and with no other information, a superintelligence would assign a high probability to Newtonian mechanics, including Newtonian gravity. [from the post you reference]
But it would have other information. Insofar as it can reason about the reasoning-process that it itself consists of, that’s a source of information (some ways by which the universe could work would be more/less likely to produce that reasoning-process). And among ways that reality might work, which the AI might hypothesize about (in the absence of data), some will be more likely than others in a “Kolmogorov complexity” sort of way.
How far/short a superintelligence could get with this sort of reasoning, I dunno.
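To make the “Kolmogorov complexity sort of way” slightly more concrete, here is a toy stand-in: weight each hypothesis by two to the minus its description length. (The hypothesis names and bit-counts below are invented; true Kolmogorov complexity is uncomputable, so any practical version would use some proxy for program length.)

```python
# Assumed description lengths, in bits, for some hypothetical rule-sets.
# These numbers are invented purely for illustration.
hypotheses = {
    "rule110": 12,
    "newton": 20,
    "newton+epicycles": 45,
}

def simplicity_prior(desc_lengths):
    """Normalized Solomonoff-style prior: P(h) proportional to 2^-K(h)."""
    weights = {h: 2.0 ** -k for h, k in desc_lengths.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

prior = simplicity_prior(hypotheses)
```

Under such a prior, adding epicycles to a theory costs it probability mass exponentially in the extra bits, which is the formal version of “some hypotheses are more likely than others before any data arrives”.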
Here is an excerpt from a TED talk by Stephen Wolfram (the creator of Wolfram Alpha) that feels a bit relevant (I find the sort of methodology he outlines deeply intuitive):
”Well, so, that leads to kind of an ultimate question: Could it be that someplace out there in the computational universe we might find our physical universe? Perhaps there’s even some quite simple rule, some simple program for our universe. Well, the history of physics would have us believe that the rule for the universe must be pretty complicated. But in the computational universe, we’ve now seen how rules that are incredibly simple can produce incredibly rich and complex behavior. So could that be what’s going on with our whole universe? If the rules for the universe are simple, it’s kind of inevitable that they have to be very abstract and very low level; operating, for example, far below the level of space or time, which makes it hard to represent things. But in at least a large class of cases, one can think of the universe as being like some kind of network, which, when it gets big enough, behaves like continuous space in much the same way as having lots of molecules can behave like a continuous fluid. Well, then the universe has to evolve by applying little rules that progressively update this network. And each possible rule, in a sense, corresponds to a candidate universe.
Actually, I haven’t shown these before, but here are a few of the candidate universes that I’ve looked at. Some of these are hopeless universes, completely sterile, with other kinds of pathologies like no notion of space, no notion of time, no matter, other problems like that. But the exciting thing that I’ve found in the last few years is that you actually don’t have to go very far in the computational universe before you start finding candidate universes that aren’t obviously not our universe. Here’s the problem: Any serious candidate for our universe is inevitably full of computational irreducibility. Which means that it is irreducibly difficult to find out how it will really behave, and whether it matches our physical universe. A few years ago, I was pretty excited to discover that there are candidate universes with incredibly simple rules that successfully reproduce special relativity, and even general relativity and gravitation, and at least give hints of quantum mechanics.”
invent General Relativity as a hypothesis [from the post you reference]
As I understand it, one of the classic experiments humans did to test general relativity (not to figure out that general relativity was probably correct, mind you, but to test it “officially”) was to measure gravitational redshift.
And redshift is an example of something that will affect many photos. A superintelligent mind might be able to use such data better than us (we, having “pathetic” mental abilities, have a much greater need to construct experiments that test one hypothesis at a time, and to gather the Bayesian evidence relating to that hypothesis from one or a few experiments).
It seems that any photo containing light from the sun (even if the picture itself doesn’t include the sun) could be a source of Bayesian evidence relating to general relativity.
It seems that GPS timing must account for relativistic effects (gravitational time dilation makes satellite clocks run fast relative to clocks on the ground). This could mean that some internet logs (from which info about how long it takes to send messages via satellite can be inferred) are another potential source of Bayesian evidence.
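For a rough sense of the size of that effect: a back-of-the-envelope weak-field calculation, using standard textbook constants, recovers the well-known figure of roughly +38 microseconds of clock drift per day for a GPS satellite.

```python
import math

# Standard physical constants and GPS orbital parameters.
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8      # speed of light, m/s
R_earth = 6.371e6     # mean Earth radius, m
r_orbit = 2.6571e7    # GPS orbital radius, m (~20,200 km altitude)

v = math.sqrt(GM / r_orbit)  # circular orbital speed, ~3.9 km/s

# Gravitational term: the satellite clock runs FAST relative to the ground
# (it sits higher in Earth's gravitational potential).
grav = GM * (1 / R_earth - 1 / r_orbit) / c**2

# Velocity term: the satellite clock runs SLOW (special-relativistic dilation).
vel = v**2 / (2 * c**2)

seconds_per_day = 86400
net_us_per_day = (grav - vel) * seconds_per_day * 1e6  # microseconds/day
```

If GPS ignored this drift, position errors would accumulate at kilometers per day, which is why the effect is large enough to leave fingerprints in timing logs.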
I don’t know exactly what and how much data a superintelligence would need to infer general relativity (if any!). How much/little evidence it could gather from a single picture of an apple, I dunno.
There is just absolutely no reason to consider general relativity at all when simpler versions of physics explain absolutely all observations you have ever encountered (which in this case is 2 frames). [from the post you reference]
I disagree with this.
First off, it makes sense to consider theories that explain more observations than just the ones you’ve encountered.
Secondly, simpler versions of physics do not explain your observations when you see 2 webcam-frames of a falling apple. In particular, the colors you see will be affected by non-Newtonian physics.
Also, the existence of apples and digital cameras relates to which theories of physics are likely/plausible. The same goes for the resolution of the video, etc.
However, there is no way to scale this to a one-shot scenario.
You say that so definitively. Almost as if you aren’t really imagining an entity that is orders of magnitude more capable/intelligent than humans. Or as if you have ruled out large swathes of the possibility-space that I would not rule out.
I just think if an AI executed it today it would have no way of surviving and expanding.
If an AGI is superintelligent and malicious, then surviving/expanding (if it gets onto the internet) seems quite clearly feasible to me.
We even have a hard time getting coronaviruses back in the box! That’s a fairly different sort of thing, but it does show how feeble we are. Another example is illegal images/videos, etc. (where the people sharing those are humans).
An AGI could plant itself onto lots of different computers, and there are lots of different humans it could try to manipulate (a low success rate would not necessarily be prohibitive). Many humans fall for pretty simple scams, and AGIs would be able to pull off much more impressive scams.
This is absolutely what engineers do. But finding the right design patterns that do this involves a lot of experimentation (not for a pipe, but for constructing e.g. a reliable transistor).
Here you speak about how humans work—and in such an absolutist way. Being feeble and error-prone reasoners, it makes sense that we need to rely heavily on experiments (and have a hard time making effective use of data not directly related to the thing we’re interested in).
That protein folding is “solved” does not disprove this IMO.
I think protein folding being “solved” exemplifies my perspective, but I agree that it doesn’t “prove” or “disprove” that much.
Biological molecules are, after all, made from simple building blocks (amino acid) with some very predictable properties (how they stick together) so it’s already vastly simplified the problem.
When it comes to predictable properties, I think there are other molecules where this is more the case than for biological ones (DNA-related machinery needs a certain “messiness” in order for the mutations that drive evolution to occur). I’m no chemist, but this is my rough impression.
are, after all, made from simple building blocks (amino acid) with some very predictable properties (how they stick together)
Ok, so you acknowledge that there are molecules with very predictable properties.
It’s ok for much/most stuff not to be predictable to an AGI, as long as the subset of stuff that can be predicted is sufficient for the AGI to make powerful plans/designs.
finding the right molecules that reliably do what you want, as well as how to put them together, etc., is a lot of research that I am pretty certain will involve actually producing those molecules and doing experiments with them.
Even IF that is the case (an assumption that I don’t share but also don’t rule out), design-plans may be made to have experimentation built into them. It wouldn’t necessarily need to be like this:
experiments being run
data being sent to the AI so that it can reason about it
then having the AI think a bit and construct new experiments
more experiments being run
data being sent to the AI so that it can reason about it
etc
I could give specific examples of ways to avoid having to do it that way, but any example I gave would be impoverished, and understate the true space of possible approaches.
His claim is that an ASI will order some DNA and get some scientists in a lab to mix it together with some substances and create nanobots.
I read the scenario he described as:
involving DNA being ordered from a lab
having some gullible person elsewhere carry out instructions, where the DNA is involved somehow
being meant as one example of a type of thing that was possible (but not ruling out that there could be other ways for a malicious AGI to go about it)
I interpreted him as pointing to a larger possibility-space than the one you present. I don’t think the more specific scenario you describe would appear prominently in his mind, nor does it in mine (you talk about getting “some scientists in a lab to mix it together”, while I don’t think this would need to happen in a lab).
Here is an excerpt from here (written in 2008), with bold emphasis added by me:
“1. Crack the protein folding problem, to the extent of being able to generate DNA strings whose folded peptide sequences fill specific functional roles in a complex chemical interaction. 2. Email sets of DNA strings to one or more online laboratories which offer DNA synthesis, peptide sequencing, and FedEx delivery. (Many labs currently offer this service, and some boast of 72-hour turnaround times.) 3. Find at least one human connected to the Internet who can be paid, blackmailed, or fooled by the right background story, into receiving FedExed vials and mixing them in a specified environment. 4. The synthesized proteins form a very primitive “wet” nanosystem which, ribosomelike, is capable of accepting external instructions; perhaps patterned acoustic vibrations delivered by a speaker attached to the beaker. 5. Use the extremely primitive nanosystem to build more sophisticated systems, which construct still more sophisticated systems, bootstrapping to molecular nanotechnology—or beyond.”
”Naturally, with this in mind, we started to build a biological teleporter. We call it the DBC. That’s short for digital-to-biological converter. Unlike the BioXp, which starts from pre-manufactured short pieces of DNA, the DBC starts from digitized DNA code and converts that DNA code into biological entities, such as DNA, RNA, proteins or even viruses. You can think of the BioXp as a DVD player, requiring a physical DVD to be inserted, whereas the DBC is Netflix. To build the DBC, my team of scientists worked with software and instrumentation engineers to collapse multiple laboratory workflows, all in a single box. This included software algorithms to predict what DNA to build, chemistry to link the G, A, T and C building blocks of DNA into short pieces, Gibson Assembly to stitch together those short pieces into much longer ones, and biology to convert the DNA into other biological entities, such as proteins.
This is the prototype. Although it wasn’t pretty, it was effective. It made therapeutic drugs and vaccines. And laboratory workflows that once took weeks or months could now be carried out in just one to two days. And that’s all without any human intervention and simply activated by the receipt of an email which could be sent from anywhere in the world. We like to compare the DBC to fax machines.
(...)
Here’s what our DBC looks like today. We imagine the DBC evolving in similar ways as fax machines have. We’re working to reduce the size of the instrument, and we’re working to make the underlying technology more reliable, cheaper, faster and more accurate.
(...)
The DBC will be useful for the distributed manufacturing of medicine starting from DNA. Every hospital in the world could use a DBC for printing personalized medicines for a patient at their bedside. I can even imagine a day when it’s routine for people to have a DBC to connect to their home computer or smart phone as a means to download their prescriptions, such as insulin or antibody therapies. The DBC will also be valuable when placed in strategic areas around the world, for rapid response to disease outbreaks. For example, the CDC in Atlanta, Georgia could send flu vaccine instructions to a DBC on the other side of the world, where the flu vaccine is manufactured right on the front lines.”
I believe understanding protein function is still vastly less developed (correct me if I’m wrong here, I haven’t followed it in detail).
I’m no expert on this, but what you say here seems in line with my own vague impression of things. As you maybe noticed, I also put “solved” in quotation marks.
However, in this specific instance, the way Eliezer phrases it, any iterative plan for alignment would be excluded.
As touched upon earlier, I myself am optimistic when it comes to iterative plans for alignment. But I would prefer such iteration to be done with caution that errs on the side of paranoia (rather than being “not paranoid enough”).
It would be ok if (many of) the people doing this iteration thought it unlikely that intuitions like Eliezer’s or mine are correct. But it would be preferable for them to carry out plans that are likely to have positive results even if they are wrong about that.
Like, you expect that since something seems hopeless to you, a superintelligent AGI would be unable to do it? Ok, fine. But let’s try to minimize the number of assumptions like that which are load-bearing in our alignment strategies. Especially assumptions where smart people who have thought about the question extensively disagree strongly.
As a sidenote:
If I lived in the stone age, I would assign low credence to us going step by step from stone-age technology to things akin to iPhones, the International Space Station, and “IBM” being written with xenon atoms.
If I lived prior to complex life (but my own existence didn’t factor into my reasoning), I would assign low credence to anything like mammals evolving.
It’s interesting to note that even though many people (such as yourself) think in a more “conservative” way than me about things like this, I am still “conservative” myself in the sense that several things that have actually happened would have seemed too “out there” for me to consider realistic.
Another sidenote:
One question we might ask ourselves is: “how many rules by which the universe could work would be consistent with e.g. the data we see on the internet?”. And by rules here, I don’t mean rules that can be derived from other rules (like e.g. the weight of a helium atom), but the parameters that most fundamentally determine how the universe works. If we...
Rank rules by (1) how simple/elegant they are and (2) by how likely the data we see on the internet would be to occur with those rules
Consider rules “different from each other” if there are differences between them in regards to the predictions they make about which nanotechnology designs would work
...my (possibly wrong) guess is that there would be a “clear winner”.
Even if my guess is correct, that leaves the question of whether finding/determining the “winner” is computationally tractable. With crude/naive search-techniques it isn’t tractable, but we don’t know the specifics of the techniques that a superintelligence might use—it could maybe develop very efficient methods for ruling out large swathes of search-space.
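To illustrate what jointly ranking those two criteria could look like, here is a toy sketch (the candidate names, bit-counts, and log-likelihoods are all invented for illustration): combine a simplicity prior with the data’s log-likelihood and check whether one candidate dominates.

```python
import math

# Each hypothetical candidate rule-set gets (1) an assumed description
# length in bits and (2) an assumed log-likelihood of the observed data.
candidates = {
    "rules_A": (15, -120.0),
    "rules_B": (40, -118.0),   # fits slightly better, but far less simple
    "rules_C": (16, -500.0),   # nearly as simple, but fits the data badly
}

def log_posterior(bits, log_lik):
    """log P(rules | data) up to a constant: log(2^-bits) + log-likelihood."""
    return -bits * math.log(2) + log_lik

ranked = sorted(
    candidates,
    key=lambda name: log_posterior(*candidates[name]),
    reverse=True,
)
winner = ranked[0]
```

In this invented example the posterior gap between the winner and the runner-up is many orders of magnitude, which is the sort of thing I mean by a “clear winner”; whether reality’s rule-space actually behaves like this is, of course, the open question.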
And a third sidenote (the last one, I promise):
Speculating about this feels sort of analogous to reasoning about a powerful chess engine (although there are also many disanalogies). I know that I can beat an arbitrarily powerful chess engine if I start from a sufficiently advantageous position. But I find it hard to predict where that “line” is (looking at a specific board position, and guessing if an optimal chess-player could beat me). Like, for some board positions the answer will be a clear “yes” or a clear “no”, but for other board-positions, it will not be clear.
I don’t know how much info and compute a superintelligence would need to make nanotechnology designs that work in a “one-shot”-ish sort of way. I’m fairly confident that the amount of computational resources used for the initial moon landing would be far too little (I’m picking an extreme example here, since I want plenty of margin for error). But I don’t know where the “line” is.
Although keep in mind that “oneshotting” does not exclude being able to run experiments (nor does it rule out fairly extensive experimentation). As I touched upon earlier, it may be possible for a plan to have experimentation built into itself. Needing to do experimentation ≠ needing access to a lab and lots of serial time.
This tweet from Eliezer seems relevant btw. I would give similar answers to all of the questions he lists that relate to nanotechnology (but I’d be somewhat more hedged/guarded—e.g. replacing “YES” with “PROBABLY” for some of them).
Likewise :)
Also, sorry about the length of this reply. As the adage goes: “If I had more time, I would have written a shorter letter.”
That seems to be one of the relevant differences between us. Although I don’t think it is the only difference that causes us to see things differently.
Other differences (I guess some of these overlap):
It seems I have higher error-bars than you on the question we are discussing now. You seem more comfortable taking the availability heuristic (if you can think of approaches for how something can be done) as conclusive evidence.
Compared to me, it seems that you see experimentation as more inseparably linked with needing to build extensive infrastructure / having access to labs, and spending lots of serial time (with much back-and-fourth).
You seem more pessimistic about the impressiveness/reliability of engineering that can be achieved by a superintelligence that lacks knowledge/data about lots of stuff.
The probability of having a single plan work, and having one of several plans (carried out in parallel) work, seems to be more linked in your mind than mine.
You seem more dismissive than me of conclusions maybe being possible to reach from first-principles thinking (about how universes might work).
I seem to be more optimistic about approaches to thinking that are akin to (a more efficient version of) “think of lots of ways the universe might work, do Montecarlo-simulations for how those conjectures would affect the probability of lots of aspects of lots of different observations, and take notice if some theories about the universe seem unusually consistent with the data we see”.
I wonder if you maybe think of computability in a different way from me. Like, you may think that it’s computationally intractable to predict the properties of complex molecules based on knowledge of the standard model / quantum physics. And my perspective would be that this is extremely contingent on the molecule, what the AI needs to know about it, etc—and that an AGI, unlike us, isn’t forced to approach this sort of thing in an extremely crude manner.
The AI only needs to find one approach that works (from an extremely vast space of possible designs/approaches). I suspect you of having fewer qualms about playing fast and lose with the distinction between “an AI will often/mostly be prevented from doing x due to y” and “an AI will always be prevented from doing x due to y”.
It’s unclear if you share my perspective about how it’s an extremely important factor that an AGI could be much better than us at doing reasoning where it has a low error-rate (in terms of logical flaws in reasoning-steps, etc).
From my perspective, I don’t see how your reasoning is qualitatively distinct from saying in the 1500s: “We will for sure never be able to know what the sun is made out of, since we won’t be able to travel there and take samples.”
Even if we didn’t have e.g. the standard model, my perspective would still be roughly what it is (with some adjustments to credences, but not qualitatively so). So to me, us having the standard model is “icing on the cake”.
Eliezer says “A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis (...)”. I might add more qualifiers (replacing “would” with “might”, etc). I think I have wider error-bars than Eliezer, but similar intuitions when it comes to this kind of thing.
Speaking of intuitions, one question that maybe gets at deeper intuitions is “could AGIs find out how to play theoretically perfect chess / solve the game of chess?”. At 5⁄1 odds, this is a claim that I myself would bet neither for nor against (I wouldn’t bet large sums at 1⁄1 odds either). While I think people of a certain mindset will think “that is computationally intractable [when using the crude methods I have in mind]”, and leave it at that.
As to my credences that a superintelligence could “oneshot” nanobots[1] - without being able to design and run experiments prior to designing this plan—I would bet neither “yes” or “no” to that a 1⁄1 odds (but if I had to bet, I would bet “yes”).
But it would have other information. Insofar as it can reason about the reasoning-process that it itself consists of, that’s a source of information (some ways by which the universe could work would be more/less likely to produce itself). And among ways that reality might work—which the AI might hypothesize about (in the absence of data) - some will be more likely than others in a “Kolmogorov complexity” sort of way.
How far/short a superintelligence could get with this sort of reasoning, I dunno.
Here is an excerpt from a TED-talk from the Wolfram Alpha that feels a bit relevant (I find the sort of methodology that he outlines deeply intuitive):
”Well, so, that leads to kind of an ultimate question: Could it be that someplace out there in the computational universe we might find our physical universe? Perhaps there’s even some quite simple rule, some simple program for our universe. Well, the history of physics would have us believe that the rule for the universe must be pretty complicated. But in the computational universe, we’ve now seen how rules that are incredibly simple can produce incredibly rich and complex behavior. So could that be what’s going on with our whole universe? If the rules for the universe are simple, it’s kind of inevitable that they have to be very abstract and very low level; operating, for example, far below the level of space or time, which makes it hard to represent things. But in at least a large class of cases, one can think of the universe as being like some kind of network, which, when it gets big enough, behaves like continuous space in much the same way as having lots of molecules can behave like a continuous fluid. Well, then the universe has to evolve by applying little rules that progressively update this network. And each possible rule, in a sense, corresponds to a candidate universe.
Actually, I haven’t shown these before, but here are a few of the candidate universes that I’ve looked at. Some of these are hopeless universes, completely sterile, with other kinds of pathologies like no notion of space, no notion of time, no matter, other problems like that. But the exciting thing that I’ve found in the last few years is that you actually don’t have to go very far in the computational universe before you start finding candidate universes that aren’t obviously not our universe. Here’s the problem: Any serious candidate for our universe is inevitably full of computational irreducibility. Which means that it is irreducibly difficult to find out how it will really behave, and whether it matches our physical universe. A few years ago, I was pretty excited to discover that there are candidate universes with incredibly simple rules that successfully reproduce special relativity, and even general relativity and gravitation, and at least give hints of quantum mechanics.”
As I understand it, the original experiment humans did to test for general relativity (not to figure out that general relativity probably was correct, mind you, but to test it “officially”) was to measure gravitational redshift.
And I guess redshift is an example of something that will affect many photos. And a superintelligent mind might be able to use such data better than us (we, having “pathetic” mental abilities, will have a much greater need to construct experiments where we only test one hypothesis at a time, and to gather the Bayesian evidence we need relating to that hypothesis from one or a few experiments).
It seems that any photo that contains lighting stemming from the sun (even if the picture itself doesn’t include the sun) can be a source of Bayesian evidence relating to general relativity:
It seems that GPS data must account for redshift in its timing system. This could maybe mean that some internet logs (where info can be surmised about how long it takes to send messages via satellite) could be another potential source for Bayesian evidence:
I don’t know exactly what and how much data a superintelligence would need to surmise general relativity (if any!). How much/little evidence it could gather from a single picture of an apple I dunno.
I disagree with this.
First off, it makes sense to consider theories that explain more observations than just the ones you’ve encountered.
Secondly, simpler versions of physics do not explain your observations when you see 2 webcam-frames of a falling apple. In particular, the colors you see will be affected by non-Newtonian physics.
Also, the existence of apples and digital cameras also relates to which theories of physics are likely/plausible. Same goes for the resolution of the video, etc, etc.
You say that so definitively. Almost as if you aren’t really imagining an entity that is orders of magnitude more capable/intelligent than humans. Or as if you have ruled out large swathes of the possibility-space that I would not rule out.
If an AGI is superintelligent and malicious, then surviving/expanding (if it gets onto the internet) seems quite clearly feasible to me.
We even have a hard time getting corona-viruses back in the box! That’s a fairly different sort of thing, but it does show how feeble we are. Another example is illegal images/videos, etc (where the people sharing those are humans).
An AGI could plant itself onto lots of different computers, and there are lots of different humans it could try to manipulate (a low success rate would not necessarily be prohibitive). Many humans fall for pretty simple scams, and AGIs would be able to pull off much more impressive scams.
Here you speak about how humans work—and in such an absolutist way. Being feeble and error-prone reasoners, it makes sense that we need to rely heavily on experiments (and have a hard time making effective use of data not directly related to the thing we’re interested in).
I think protein being “solved” exemplifies my perspective, but I agree about it not “proving” or “disproving” that much.
When it comes to predictable properties, I think there are other molecules where this is more the case than for biological ones (DNA-stuff needs to be “messy” in order for mutations that make evolution work to occur). I’m no chemist, but this is my rough impression.
Ok, so you acknowledge that there are molecules with very predictable properties.
It’s ok for much/most stuff not to be predictable to an AGI, as long as the subset of stuff that can be predicted is sufficient for the AGI to make powerful plans/designs.
Even IF that is the case (an assumption that I don’t share but also don’t rule out), design-plans may be made to have experimentation built into them. It wouldn’t necessarily need to be like this:
- experiments being run
- data being sent to the AI so that it can reason about it
- the AI thinking a bit and constructing new experiments
- more experiments being run
- data being sent to the AI so that it can reason about it
- etc.
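To make the contrast concrete, here is a toy sketch of my own (not something from the discussion itself, and all names in it are made up): a plan prepared in advance as a contingency tree covers every possible experimental outcome, so the experiments can be run on-site without any round-trips back to the planner.

```python
# Toy illustration: a precomputed contingency plan embeds experimentation
# inside the plan itself, so executing it needs no back-and-forth with the
# planner, unlike the serial loop described above.

def run_experiment(setup):
    # Stand-in for a real experiment; here, a deterministic toy outcome.
    return "high" if setup % 2 == 0 else "low"

# The plan: a decision tree prepared ahead of time, with a branch for
# every observation that might come back from each experiment.
contingency_plan = {
    "start":    {"action": 0, "next": {"high": "branch_a", "low": "branch_b"}},
    "branch_a": {"action": 2, "next": {"high": "done_a", "low": "done_b"}},
    "branch_b": {"action": 1, "next": {"high": "done_c", "low": "done_d"}},
}

def execute(plan, node="start"):
    """Walk the precomputed tree, running experiments along the way."""
    path = []
    while node in plan:
        step = plan[node]
        outcome = run_experiment(step["action"])
        path.append((node, outcome))
        node = step["next"][outcome]
    return node, path

final, trace = execute(contingency_plan)
# final == "done_a"; trace == [("start", "high"), ("branch_a", "high")]
```

The point of the sketch is only that “needing experiments” and “needing serial round-trips to the planner” are separable: the tree grows with the number of contingencies covered, but the execution itself involves no waiting on the planner.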
I could give specific examples of ways to avoid having to do it that way, but any example I gave would be impoverished, and understate the true space of possible approaches.
I read the scenario he described as:
- involving DNA being ordered from a lab
- having some gullible person elsewhere carry out instructions, where the DNA is involved somehow
- being meant as one example of a type of thing that was possible (but not ruling out that there could be other ways for a malicious AGI to go about it)
I interpreted him as pointing to a larger possibility-space than the one you present. I don’t think the more specific scenario you describe would appear prominently in his mind, nor in mine (you talk about getting “some scientists in a lab to mix it together”, while I don’t think this would need to happen in a lab).
Here is an excerpt from here (written in 2008), with bolding of text done by me:
“1. Crack the protein folding problem, to the extent of being able to generate DNA
strings whose folded peptide sequences fill specific functional roles in a complex
chemical interaction.
2. Email sets of DNA strings to one or more online laboratories which offer DNA
synthesis, peptide sequencing, and FedEx delivery. (Many labs currently offer this
service, and some boast of 72-hour turnaround times.)
3. Find at least one human connected to the Internet who can be paid, blackmailed,
or fooled by the right background story, into receiving FedExed vials and mixing
them in a specified environment.
4. The synthesized proteins form a very primitive “wet” nanosystem which, ribosomelike, is capable of accepting external instructions; perhaps patterned acoustic vibrations delivered by a speaker attached to the beaker.
5. Use the extremely primitive nanosystem to build more sophisticated systems, which
construct still more sophisticated systems, bootstrapping to molecular
nanotechnology—or beyond.”
Btw, here are excerpts from a TED-talk by Dan Gibson from 2018:
”Naturally, with this in mind, we started to build a biological teleporter. We call it the DBC. That’s short for digital-to-biological converter. Unlike the BioXp, which starts from pre-manufactured short pieces of DNA, the DBC starts from digitized DNA code and converts that DNA code into biological entities, such as DNA, RNA, proteins or even viruses. You can think of the BioXp as a DVD player, requiring a physical DVD to be inserted, whereas the DBC is Netflix. To build the DBC, my team of scientists worked with software and instrumentation engineers to collapse multiple laboratory workflows, all in a single box. This included software algorithms to predict what DNA to build, chemistry to link the G, A, T and C building blocks of DNA into short pieces, Gibson Assembly to stitch together those short pieces into much longer ones, and biology to convert the DNA into other biological entities, such as proteins.
This is the prototype. Although it wasn’t pretty, it was effective. It made therapeutic drugs and vaccines. And laboratory workflows that once took weeks or months could now be carried out in just one to two days. And that’s all without any human intervention and simply activated by the receipt of an email which could be sent from anywhere in the world. We like to compare the DBC to fax machines.
(...)
Here’s what our DBC looks like today. We imagine the DBC evolving in similar ways as fax machines have. We’re working to reduce the size of the instrument, and we’re working to make the underlying technology more reliable, cheaper, faster and more accurate.
(...)
The DBC will be useful for the distributed manufacturing of medicine starting from DNA. Every hospital in the world could use a DBC for printing personalized medicines for a patient at their bedside. I can even imagine a day when it’s routine for people to have a DBC to connect to their home computer or smart phone as a means to download their prescriptions, such as insulin or antibody therapies. The DBC will also be valuable when placed in strategic areas around the world, for rapid response to disease outbreaks. For example, the CDC in Atlanta, Georgia could send flu vaccine instructions to a DBC on the other side of the world, where the flu vaccine is manufactured right on the front lines.”
I’m no expert on this, but what you say here seems in line with my own vague impression of things. As you maybe noticed, I also put “solved” in quotation marks.
As touched upon earlier, I myself am optimistic when it comes to iterative plans for alignment. But I would prefer such iteration to be done with caution that errs on the side of paranoia (rather than being “not paranoid enough”).
It would be ok if (many of) the people doing this iteration think it unlikely that intuitions like Eliezer’s or mine are correct. But it would be preferable for them to carry out plans that are likely to have positive results even if they are wrong about that.
Like, you expect that since something seems hopeless to you, a superintelligent AGI would be unable to do it? Ok, fine. But let’s try to minimize the number of assumptions like that which are load-bearing in our alignment strategies. Especially assumptions where smart people who have thought about the question extensively disagree strongly.
As a sidenote:
If I lived in the stone age, I would assign low credence to us going step by step from stone-age technologies to things akin to iPhones, the international space station, and IBM being written with xenon atoms.
If I lived prior to complex life (but my own existence didn’t factor into my reasoning), I would assign low credence to anything like mammals evolving.
It’s interesting to note that even though many people (such as yourself) have a “conservative” way of thinking (about things such as this) compared to me, I am still myself “conservative” in the sense that there are several things that have happened that would have seemed too “out there” to appear realistic to me.
Another sidenote:
One question we might ask ourselves is: “how many rules by which the universe could work would be consistent with e.g. the data we see on the internet?”. And by rules here, I don’t mean rules that can be derived from other rules (like e.g. the weight of a helium atom), but the parameters that most fundamentally determine how the universe works. If we...
- Rank rules by (1) how simple/elegant they are and (2) how likely the data we see on the internet would be to occur under those rules
- Consider rules “different from each other” if they differ in the predictions they make about which nanotechnology designs would work
...my (possibly wrong) guess is that there would be a “clear winner”.
Even if my guess is correct, that leaves the question of whether finding/determining the “winner” is computationally tractable. With crude/naive search-techniques it isn’t tractable, but we don’t know the specifics of the techniques that a superintelligence might use—it could maybe develop very efficient methods for ruling out large swathes of search-space.
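The ranking idea above can be sketched as crude Bayesian model comparison: weigh each candidate rule-set by a simplicity prior (shorter description, higher prior) times the likelihood it assigns to the observed data, and see whether one candidate dominates. This is my own toy illustration with made-up numbers, not a claim about how such a search would actually be done:

```python
import math

# Hypothetical candidates: (name, description length in bits, P(data | rules)).
# The numbers are invented purely for illustration.
candidates = [
    ("rules_A", 120, 1e-9),   # simple, and fits the data well
    ("rules_B", 118, 1e-15),  # slightly simpler, but fits the data poorly
    ("rules_C", 500, 2e-9),   # fits marginally better, but far more complex
]

def log_score(description_length, likelihood):
    # log P(rules) + log P(data | rules), with a simplicity prior
    # P(rules) proportional to 2^-description_length.
    return -description_length * math.log(2) + math.log(likelihood)

ranked = sorted(candidates, key=lambda c: log_score(c[1], c[2]), reverse=True)
winner = ranked[0][0]
# winner == "rules_A": neither a small gain in simplicity nor a small gain
# in fit rescues a candidate that loses badly on the other criterion.
```

The “clear winner” conjecture is then the claim that, over the real (astronomically larger) hypothesis space, one rule-set’s score would dominate the rest; the tractability question is whether anything could evaluate or prune that space efficiently.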
And a third sidenote (the last one, I promise):
Speculating about this feels sort of analogous to reasoning about a powerful chess engine (although there are also many disanalogies). I know that I can beat an arbitrarily powerful chess engine if I start from a sufficiently advantageous position. But I find it hard to predict where that “line” is (looking at a specific board position, and guessing if an optimal chess-player could beat me). Like, for some board positions the answer will be a clear “yes” or a clear “no”, but for other board-positions, it will not be clear.
I don’t know how much info and compute a superintelligence would need to make nanotechnology designs that work in a “one shot”-ish sort of way. I’m fairly confident that the amount of computational resources used for the initial moon landing would be far too little (I’m picking an extreme example here, since I want plenty of margin for error). But I don’t know where the “line” is.
Although keep in mind that “oneshotting” does not exclude being able to run experiments (nor does it rule out fairly extensive experimentation). As I touched upon earlier, it may be possible for a plan to have experimentation built into itself. Needing to do experimentation ≠ needing access to a lab and lots of serial time.
This tweet from Eliezer seems relevant btw. I would give similar answers to all of the questions he lists that relate to nanotechnology (but I’d be somewhat more hedged/guarded—e.g. replacing “YES” with “PROBABLY” for some of them).