The AI risk scenario that Eliezer Yudkowsky uses relatively often is that of an AI solving the protein folding problem.
If you believe a “hard takeoff” to be probable, what reason is there to believe that the gap between (a) an AI capable of cracking that specific problem and (b) an AI triggering an intelligence explosion is too short for humans to do something with the resulting technological breakthrough that is similarly catastrophic to what the AI would have done?
In other words, does the protein folding problem require an AI to reach a level of sophistication that would allow humans, or the AI itself, to reach the stage of an intelligence explosion within days or months? How so?
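For concreteness: at its computational core, “predict protein structures from their sequences” is a search for a minimum-energy fold. A toy caricature, the 2D HP lattice model (itself already NP-hard), can be sketched in a few lines; the sequence, names and brute-force search below are purely illustrative and not anyone’s proposed method.

```python
from itertools import product

# Toy 2D HP lattice model: each residue is Hydrophobic (H) or Polar (P).
# A fold is a self-avoiding walk on the grid; its energy is -1 for every
# pair of H residues that touch on the lattice without being sequence
# neighbours.  Even in this caricature, finding the minimum-energy fold
# is a hard combinatorial search problem.

MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

def energy(sequence, coords):
    """Count H-H lattice contacts that are not sequence neighbours."""
    pos = {c: i for i, c in enumerate(coords)}
    e = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != "H":
            continue
        for dx, dy in MOVES.values():
            j = pos.get((x + dx, y + dy))
            if j is not None and j > i + 1 and sequence[j] == "H":
                e -= 1
    return e

def best_fold(sequence):
    """Brute-force search over all self-avoiding walks (tiny inputs only)."""
    best = (0, None)
    for moves in product(MOVES.values(), repeat=len(sequence) - 1):
        coords, ok = [(0, 0)], True
        for dx, dy in moves:
            x, y = coords[-1]
            nxt = (x + dx, y + dy)
            if nxt in coords:          # self-intersection: not a valid fold
                ok = False
                break
            coords.append(nxt)
        if ok:
            e = energy(sequence, coords)
            if e < best[0]:
                best = (e, coords)
    return best

print(best_fold("HPHPPHHPHH"))   # exhaustive search over ~4^9 candidate walks
```

Real structure prediction replaces this toy energy function and exhaustive search with physical models and heuristics, but the shape of the problem is the same: a huge conformation space scored by an objective.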
My assumption is that the protein-folding problem is unimaginably easier than an AI doing recursive self-improvement without breaking itself.
Admittedly, Eliezer is describing something harder than the usual interpretation of the protein-folding problem, but it still seems a lot less general than a program making itself more intelligent.
Is this question equivalent to “Is the protein-folding problem as hard as the build-a-smarter-intelligence-than-I-am problem?”? It seems like it ought to be, but I’m genuinely unsure, as the wording of your question kind of confuses me.
If so, my answer would be that it depends on how intelligent I am, since I expect the second problem to get more difficult as I get more intelligent. If we’re talking about the actual me… yeah, I don’t have higher confidence either way.
Is this question equivalent to “Is the protein-folding problem as hard as the build-a-smarter-intelligence-than-I-am problem?”?
It is mostly equivalent. Is it easier to design an AI that can solve one specific hard problem than to design an AI that can solve all hard problems?
Expecting that only a fully-fledged artificial general intelligence is able to solve the protein-folding problem seems to be equivalent to believing the conjunction “a universal problem solver can solve the protein-folding problem” AND “a universal problem solver is easier to build than the protein-folding problem is to solve”. Are there good reasons to believe this?
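A small formal note on why the conjunction matters (labels mine): writing X for “a universal problem solver can solve the protein-folding problem” and Y for “a universal problem solver is easier to build than the protein-folding problem is to solve”, the belief in question can be no more probable than its weakest part,

$$ P(X \wedge Y) \;\le\; \min\bigl(P(X),\, P(Y)\bigr), $$

and since X is relatively uncontroversial, the case rests almost entirely on how probable Y is.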
ETA: My perception is that people who believe that unfriendly AI will come sooner than nanotechnology believe that it is easier to devise a computer algorithm to devise a computer algorithm to predict protein structures from their sequences than to directly devise a computer algorithm to predict protein structures from their sequences. This seems counter-intuitive.
it is easier to devise a computer algorithm to devise a computer algorithm to predict protein structures from their sequences than to directly devise a computer algorithm to predict protein structures from their sequences. This seems counter-intuitive.
Ah, this helps, thanks.
For my own part, the idea that we might build tools better at algorithm-development than our own brains are doesn’t seem counterintuitive at all… we build a lot of tools that are better than our own brains at a lot of things. Neither does it seem implausible that there exist problems that are solvable by algorithm-development, but whose solution requires algorithms that our brains aren’t good enough algorithm-developers to develop algorithms to solve.
So it seems reasonable enough that there are problems which we’ll solve faster by developing algorithm-developers to solve them for us, than by trying to solve the problem itself.
Whether protein-folding is one of those problems, I have absolutely no idea. But it sounds like your position isn’t unique to protein-folding.
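To make “a tool that develops algorithms” concrete in the most minimal way: the sketch below brute-forces a tiny expression language for a program consistent with an input/output specification. It only illustrates the bare concept (a program producing programs we did not write by hand); the language and every name in it are invented for this example, and nothing here is meant to suggest how a serious algorithm-developer would work.

```python
import itertools
import operator

# A minimal "algorithm-developer": brute-force search over a tiny expression
# language for a program consistent with an input/output specification.
# Purely illustrative; the language and names are invented for this sketch.

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
LEAVES = ["x", 0, 1, 2]

def expressions(depth):
    """Enumerate all expression trees of at most the given depth."""
    if depth == 0:
        yield from LEAVES
        return
    yield from expressions(depth - 1)
    for op, left, right in itertools.product(OPS, expressions(depth - 1), expressions(depth - 1)):
        yield (op, left, right)

def evaluate(expr, x):
    """Run a candidate program on a single input."""
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def develop(examples, max_depth=2):
    """Return the first program that reproduces every (input, output) pair."""
    for expr in expressions(max_depth):
        if all(evaluate(expr, x) == y for x, y in examples):
            return expr
    return None

# "Develop an algorithm" for f(x) = x*x + 1 from examples alone:
print(develop([(0, 1), (1, 2), (2, 5), (3, 10)]))
```

Whether anything in this family scales to problems like protein structure prediction is, of course, the question the thread is arguing about.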
For my own part, the idea that we might build tools better at algorithm-development than our own brains are doesn’t seem counterintuitive at all...
So you believe that many mathematical problems are too hard for humans to solve but that humans can solve all of mathematics?
I already asked Timothy Gowers a similar question and I really don’t understand how people can believe this.
In order to create an artificial mathematician it is first necessary to discover, prove and encode the mathematics of discovering and proving non-arbitrary mathematics (i.e. to encode a formalization of the natural language goal “be as good as humans at mathematics”). This seems much more difficult than solving any single problem. And that’s just mathematics...
Neither does it seem implausible that there exist problems that are solvable by algorithm-development, but whose solution requires algorithms that our brains aren’t good enough algorithm-developers to develop algorithms to solve.
I do not disagree with this in theory. After all, evolution is an example of this. But it was not computationally simple for evolution to do so, and it did so by a bottom-up approach, piece by piece.
So it seems reasonable enough that there are problems which we’ll solve faster by developing algorithm-developers to solve them for us, than by trying to solve the problem itself.
To paraphrase your sentence: It seems reasonable that we can design an algorithm that can design algorithms that we are unable to design.
This can only be true in the sense that this algorithm-design-algorithm would run faster on other computational substrates than human brains. I agree that this is possible. But do the relevant algorithms fall into a class for which a speed advantage would be substantial?
Again, in theory, all of this is fine. But how do you know that general algorithm design can be captured by an algorithm that (a) is simpler than most specific algorithms, (b) executes faster than evolution, (c) can locate useful algorithms within the infinite space of programs, and (d) will actually be discovered by humans?
Some people here seem to be highly confident about this. How?
ETA: Maybe this post better highlights the problems I see.
So you believe that many mathematical problems are too hard for humans to solve but that humans can solve all of mathematics?
All of mathematics? Dunno. I’m not even sure what that phrase refers to. But sure, there exist mathematical problems that humans can’t solve unaided, but which can be solved by tools we create.
I really don’t understand how people can believe this. In order to create an artificial mathematician it is first necessary to discover, prove and encode the mathematics of discovering and proving non-arbitrary mathematics (i.e. to encode a formalization of the natural language goal “be as good as humans at mathematics”). This seems much more difficult than solving any single problem.
In other words: you believe that if we take all possible mathematical problems and sort them by difficulty-to-humans, the build-an-artificial-mathematician problem will turn out to be the most difficult?
I don’t mean to put words in your mouth here, I just want to make sure I understood you.
If so… why do you believe that?
To paraphrase your sentence: It seems reasonable that we can design an algorithm that can design algorithms that we are unable to design.
Yes, that’s a fair paraphrase.
This can only be true in the sense that this algorithm-design-algorithm would run faster on other computational substrates than human brains. I agree that this is possible. But do the relevant algorithms fall into a class for which a speed advantage would be substantial?
Nah, I’m not talking about speed.
But how do you know that general algorithm design can be captured by an algorithm that (a) is simpler than most specific algorithms
Can you clarify what you mean by “simpler” here? If you mean in some objective sense, like how many bits would be required to specify it in a maximally compressed form or some such thing, I don’t claim that. If you mean easier for humans to develop… well, of course I don’t know that, but it seems more plausible to me than the idea that human brains happen to be the optimal machine for developing algorithms.
(b) executes faster than evolution
We have thus far done pretty well at this; evolution is slow. I don’t expect that to change.
(c) can locate useful algorithms within the infinite space of programs
Well, this is part of the problem specification. A tool for generating useless algorithms would be much easier to build.
(d) will actually be discovered by humans?
(shrug) Perhaps we won’t. Perhaps we won’t solve protein-folding, either.
Some people here seem to be highly confident about this. How?
Can you quantify “highly confident” here?
For example, what confidence do you consider appropriate for the idea that there exists at least one useful algorithm A, and at least one artificial algorithm-developer AD, such that it’s easier for humans to develop AD than to develop A, and it’s easier for AD to develop A than it is for humans to develop A?
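One way to write that claim down symbolically (notation mine), with $c_X(Y)$ standing for how hard it is for $X$ to produce $Y$:

$$ \exists\, A,\ AD:\quad c_{\mathrm{human}}(AD) < c_{\mathrm{human}}(A) \;\;\wedge\;\; c_{AD}(A) < c_{\mathrm{human}}(A), $$

with $A$ required to be useful. The confidence being asked about is in this existential claim, not in any particular choice of $A$ such as protein folding.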
In other words: you believe that if we take all possible mathematical problems and sort them by difficulty-to-humans, the build-an-artificial-mathematician problem will turn out to be the most difficult?
If you want an artificial agent to solve problems for you then you need to somehow constrain it, since there are an infinite number of problems. In this sense it is easier to specify an AI to solve a single problem, such as the protein-folding problem, than one to solve all problems (whatever that means; supposedly “general intelligence”).
The problem here is that goals and capabilities are not orthogonal. It is more difficult to design an AI that can play all possible games, and then tell it to play a certain game, than to design an AI to play a certain game in the first place.
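To make that asymmetry concrete: in the toy sketch below the search code is game-agnostic, and everything it knows about any particular game has to be handed to it as an explicit specification. The example game (one-heap Nim) and all names are invented for this illustration; it is not meant as a model of a real general game player.

```python
# A generic game player versus the game specification it needs.  The search
# below is game-agnostic; everything it knows about any particular game must
# be supplied via `rules`.

def best_move(rules, state):
    """Exhaustive minimax over whatever game `rules` describes (tiny games only)."""
    def value(s):
        # Value of state `s` from the perspective of the player about to move.
        if rules["terminal"](s):
            return rules["utility"](s)
        return max(-value(rules["result"](s, m)) for m in rules["moves"](s))
    return max(rules["moves"](state), key=lambda m: -value(rules["result"](state, m)))

# The game-specific "constraint": one-heap Nim, take 1, 2 or 3 objects per
# turn, and whoever takes the last object wins.
nim = {
    "moves":    lambda heap: [n for n in (1, 2, 3) if n <= heap],
    "result":   lambda heap, take: heap - take,
    "terminal": lambda heap: heap == 0,
    "utility":  lambda heap: -1,   # the player to move at an empty heap has lost
}

print(best_move(nim, 7))   # prints 3: leave the opponent a multiple of four
```

Whether the specification really is most of the work for something like protein folding is exactly what is in dispute; the sketch only shows where the specification has to go.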
Can you clarify what you mean by “simpler” here?
The information-theoretic complexity of the code of a general problem solver constrained to solve a specific problem should be larger than the constraint itself. I assume here that the constraint is most of the work in getting an algorithm to do useful work. I like to exemplify this by the difference between playing chess and doing mathematics. Both are rigorously defined activities, but one has a clear and simple terminal goal, while the other is open-ended and thus hard to constrain.
For example, what confidence do you consider appropriate for the idea that there exists at least one useful algorithm A, and at least one artificial algorithm-developer AD, such that it’s easier for humans to develop AD than to develop A, and it’s easier for AD to develop A than it is for humans to develop A?
The more general the artificial algorithm-developer is, the less confident I am that it is easier to create than the specific algorithm itself.
I agree that specialized tools to perform particular tasks are easier to design than general-purpose tools. It follows that if I understand a problem well enough to know what tasks must be performed in order to solve that problem, it should be easier to solve that problem by designing specialized tools to perform those tasks, than by designing a general-purpose problem solver.
I agree that the complexity of a general problem solver should be larger than that of whatever constrains it to work on a specific task.
I agree that for a randomly selected algorithm A2, and a randomly selected artificial algorithm-developer AD2, the more general AD2 is the more likely it is that A2 is easier to develop than AD2.
I agree that the complexity of a general problem solver should be larger than that of whatever constrains it to work on a specific task.
What I meant is that if you have a very general and information-theoretically simple problem solver, like evolution or AIXI, then in order to make it solve a specific problem you need a complex fitness function or, in the case of AIXI, a substantial head start (the large multiplicative constant mentioned in Hutter’s paper).
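For reference, the sense in which AIXI is information-theoretically simple is that its action rule fits in one line (schematically, following Hutter):

$$ a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[r_k + \cdots + r_m\bigr] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}, $$

where $U$ is a universal Turing machine, $q$ ranges over environment programs of length $\ell(q)$, the $a$, $o$ and $r$ are actions, observations and rewards, and $m$ is the horizon. Everything problem-specific has to arrive through the observation and reward channel, which is the “constraint” being discussed; the computable variants of AIXI pay for this simplicity with the enormous constants mentioned above.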
When producing e.g. a chair, an AI will have to either know the specifications of the chair (such as its size or the material it is supposed to be made of) or else know how to choose a specification from an otherwise infinite set of possible specifications. Given a poorly designed fitness function, or the inability to refine its fitness function, an AI will either (a) not know what to do or (b) not be able to converge on a good solution, if it converges at all, given limited computational resources.
In a sense it is therefore true that a universal problem solver is easier to design than any specialized expert system. But only if you ignore the constraint it takes to “focus” the universal problem solver sufficiently to make it solve the right problem efficiently. This means that the time to develop the universal problem solver plus the time it takes to constrain it might be longer than the time to develop the specialized solver, since constraining it means already knowing a lot about the problem in question. ETA: Or take science as another example. Once you have generated a hypothesis, and an experiment to test it, you have already done most of the work. What reason do I have to believe that this is not true for the protein folding problem?
if you have a very general and information-theoretically simple problem solver, like evolution or AIXI, then in order to make it solve a specific problem you need a complex fitness function
I agree with this as well. That said, sometimes that fitness function is implicit in the real world, and need not be explicitly formalized by me.
Once you have generated a hypothesis, and an experiment to test it, you have already done most of the work. What reason do I have to believe that this is not true for the protein folding problem?
As I’ve said a couple of times now, I don’t have a dog in the race wrt the protein folding problem, but your argument seems to apply equally well to all conceivable problems. That’s why I asked a while back whether you think algorithm design is the single hardest problem for humans to solve. As I suggested then, I have no particular reason to think the protein-folding problem is harder (or easier) than the algorithm-design problem, but it seems really unlikely that no problem has this property.
That’s why I asked a while back whether you think algorithm design is the single hardest problem for humans to solve.
The problem is that I don’t know what you mean by “algorithm design”. Once you have solved “algorithm design”, what do you expect to be able to do with it, and how?
Once you compute this “algorithm design” algorithm, what will its behavior look like? Will it output all possible algorithms, or just the algorithms that you care about? If the latter, how does it know which algorithms you care about?
There is no brain area for “algorithm design”. There is just this computational substrate that can learn, recognize patterns etc. and whose behavior is defined and constrained by its environmental circumstances.
Say you cloned Donald E. Knuth and made him grow up under completely different circumstances, e.g. as a member of some Amazonian tribe. Now this clone has the same algorithm-design potential, but he lacks the right inputs and constraints to output “The Art of Computer Programming”.
What I want to highlight is that “algorithm design”, or even “general intelligence”, is not sufficient to get “an algorithm that predicts protein structures from their sequences”.
Solving “algorithm design” or “general intelligence” does not give you some sort of oracle, in the same sense that a universal Turing machine does not give you “algorithm design” or “general intelligence”. You have to program the Turing machine in order to compute “algorithm design” or “general intelligence”. In the same sense, you have to define what algorithm you want, or what problem you want solved, in order for your “algorithm design” or “general intelligence” to do what you want.
Just imagine having a human baby, the clone of a 250-IQ eugenics experiment, and asking it to solve protein folding for you. Well, it doesn’t even speak English yet. Even though you have this superior general intelligence, it won’t do what you want it to do without a lot of additional work. And even then it is not clear that it will have the motivation to do so.
You have a good point in the case of trying to make a narrow intelligence solve the protein folding problem. Yes, to make it spit out solutions to protein folding (even if given a “general” intelligence), you first must give it a detailed specification of the problem, which may take much work to derive in the first place.
But a solution to the protein folding problem is a means to an end: generally, the subgoal of being able to manipulate matter. To put it simply, the information complexity of the “practical facet” of the protein folding problem is actually not that high, because other, much more general problems (“the manipulating-matter problem”) point to it. An unfriendly AGI with general intelligence above a human’s doesn’t need us to do any work specifying the protein folding problem for it; it will find the problem itself in its search for ways to “take over the world”.
Conversely, while an AGI with a goal like rearranging all the matter in the world in a particular way might happen to solve the protein folding problem in the process of its planning, such a machine does not qualify as a useful protein-folding-solver-bot for us humans. Firstly, because there’s no guarantee it will actually end up solving protein folding (maybe some other method of rearranging matter turns out to be more useful). Secondly, because it doesn’t necessarily care to solve the entire protein folding problem, just the special cases relevant to its goals. Thirdly, because it has no interest in giving us the solutions.
That’s why writing an AGI doesn’t violate information theory by giving us a detailed specification of the protein folding problem for free.
An unfriendly AGI with general intelligence above a human’s doesn’t need us to do any work specifying the protein folding problem for it; it will find the problem itself in its search for ways to “take over the world”.
First of all, we have narrow AIs that do not exhibit Omohundro’s “Basic AI Drives”. Secondly, everyone seems to agree that it should be possible to create general AI that (a) does not exhibit those drives, (b) exhibits those drives only to a limited extent, or (c) focuses those drives in a manner that agrees with human volition.
The question, then, regarding whether a protein-folding solver will be invented before a general AI that solves the same problem for instrumental reasons, is about the algorithmic complexity of an AI whose terminal goal is protein folding versus that of an AI that exhibits the necessary drives to solve an equivalent problem for instrumental reasons.
The first sub-question here is whether the aforementioned drives are a feature or a side-effect of general AI, i.e. whether those drives have to be an explicit feature of a general AI or whether they are an implicit consequence. The belief around here seems to be the latter.
Given that the necessary drives are implicit, the second sub-question is then about the point at which mostly well-behaved (bounded) AI systems become motivated to act in unbounded and catastrophic ways.
My objections to Omohundro’s “Basic AI Drives” are basically twofold: (a) I do not believe that AIs designed by humans will ever exhibit Omohundro’s “Basic AI Drives” in an unbounded fashion, and (b) I believe that AIs that do exhibit Omohundro’s “Basic AI Drives” are either infeasible or require a huge number of constraints to work at all.
(a) The point of transition (step 4 below) between systems that do not exhibit Omohundro’s “Basic AI Drives” and those that do is too vague for the hypothesis to deserve non-negligible probability:
(1) Present-day software is better than previous software generations at understanding and doing what humans mean.
(2) There will be future generations of software which will be better than the current generation at understanding and doing what humans mean.
(3) If there is better software, there will be even better software afterwards.
(4) Magic happens.
(5) Software will be superhumanly good at understanding what humans mean but catastrophically worse than all previous generations at doing what humans mean.
(b) An AI that does exhibit Omohundro’s “Basic AI Drives” would be paralyzed by infinite choice and low-probability hypotheses that imply vast amounts of expected utility.
There is an infinite choice of paperclip designs to choose from, and choosing a wrong design could have negative consequences in the range of −3^^^^3 utils.
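For readers unfamiliar with the notation: 3^^^^3 is Knuth’s up-arrow notation, $3\uparrow\uparrow\uparrow\uparrow 3$, whose defining recursion is sketched below. It is runnable only for tiny arguments; the values invoked here are far beyond any physical computation, which is the point of invoking them.

```python
def up(a, n, b):
    """Knuth's up-arrow: up(a, 1, b) = a**b, and n arrows iterate the (n-1)-arrow operation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1          # empty-tower convention: a "n-arrow" 0 = 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 2, 3))        # 3^^3 = 3**3**3 = 7625597484987
# up(3, 4, 3) corresponds to 3^^^^3; do not try to evaluate it.
```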
Such an AI will not even be able to decide whether trying to acquire unlimited computational resources is instrumentally rational, because without more resources it will be unable to decide whether the actions required to acquire those resources might be instrumentally irrational from the perspective of what it is meant to do (the fact that any terminal goal can be realized in an infinite number of ways implies an infinite number of instrumental goals to choose from).
Another example is self-protection, which requires a definition of “self”; otherwise the AI risks destroying itself.
Well, I’ve argued with you about (a) in the past, and it didn’t seem to go anywhere, so I won’t repeat that.
With regards to (b), that sounds like a good list of problems we need to solve in order to obtain AGI. I’m sure someone somewhere is already working on them.
I have no strong opinion on whether a “hard takeoff” is probable. (Because I haven’t thought about it a lot, not because I think the evidence is exquisitely balanced.) I don’t see any particular reason to think that protein folding is the only possible route to a “hard takeoff”.
What is alleged to make for an intelligence explosion is having a somewhat-superhuman AI that’s able to modify itself or make new AIs reasonably quickly. A solution to the protein folding problem might offer one way to make new AIs much more capable than oneself, I suppose, but it’s hardly the only way one can envisage.
Why did you interview Gowers anyway? It’s not like he has any domain knowledge in artificial intelligence.
He works on automatic theorem proving. In addition, I was simply curious what a top-notch mathematician thinks about the whole subject.