I don’t think that the LW “party line” is that mere additional computational resources are sufficient to get superintelligence or even just intelligence (I’d find such a view simplistic and a bit naive, but I don’t find Eliezer’s views simplistic and naive).
I think that it’s pretty likely that today’s hardware would in theory be sufficient to run roughly human-level or superhuman intelligence (in the broad sense of “could do most intellectual jobs humans do today”, for example), though that doesn’t mean humans are likely to build such a thing anytime soon (just as a competent engineer teleported back to ancient Greece could build some amazing devices with the technology of the time, even though the Greeks themselves were not about to invent those things).
I do think that as computational resources increase, the number of ways of designing minds increases, which makes it more and more likely that someone will eventually figure out how to make something AGI-ish. But that’s not the same as saying “just increase computational resources and it’ll work!”.
For an analogy, it may have been possible to build an internal combustion engine with 1700s-era technology, but as time went by and the precision of measurement and manufacturing tools increased, it became easier and easier to do. That doesn’t mean that giving 1700s-era manufacturing machinery modern levels of precision would be enough to let people of that era build an internal combustion engine.
...but I don’t find Eliezer’s views simplistic and naive...
My whole problem is that some people seem to have high confidence in the following idea voiced by Eliezer:
I think that at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability—“AI go FOOM”. Just to be clear on the claim, “fast” means on a timescale of weeks or hours rather than years or decades; and “FOOM” means way the hell smarter than anything else around, capable of delivering in short time periods technological advancements that would take humans decades, probably including full-scale molecular nanotechnology (that it gets by e.g. ordering custom proteins over the Internet with 72-hour turnaround time).
I do not doubt that it is a possibility, but I just don’t see how people justify being very confident about it. It sure sounds nice when formulated in English. But is it the result of disjunctive reasoning? I perceive it to be conjunctive: a lot of assumptions have to turn out to be correct for humans to discover simple algorithms overnight that can then be improved to self-improve explosively. I would compare that to the idea of a Babylonian mathematician discovering modern science and physics after being uploaded into a supercomputer. I believe that to be highly speculative. It assumes that he could brute-force conceptual revolutions. Even if he were given a detailed explanation of how his mind works and the resources to understand it, self-improving to achieve superhuman intelligence assumes that throwing resources at the problem of intelligence will magically allow him to pull improved algorithms from solution space as if they were signposted. But unknown unknowns are not signposted. It’s rather like finding a needle in a haystack. Evolution is great at doing that, but assuming that one could speed up evolution considerably is yet another assumption about technological feasibility and real-world resources.
OK, so here are some assumptions, stated as disjunctively as I can:
1: Humans have, over the last hundred years, created systems in the world that are intended to achieve certain goals. Call those systems “technology” for convenience.
2: At least some technology is significantly more capable of achieving the goals it’s intended to achieve than its closest biologically evolved analogs. For example, technological freight-movers can move more freight further and faster than biologically evolved ones.
3: For the technology described in assumption 2, biological evolution would have required millennia to develop equivalently capable systems for achieving the goals of that technology.
4: Human intelligence (rather than other things such as, for example, human musculature or covert intervention by technologically advanced aliens) is primarily responsible for the creation of technology described in assumption 2.
5: Technology analogous to the technology-developing functions of human intelligence is in principle possible.
6: Technological technology-developers, if developed, will be significantly more capable of developing technology than human intelligence is.
Here are some assertions of confidence of these assumptions:
A1: 1-epsilon.
A2: 1-epsilon.
A3, given A2: .99+
A4, given A2: ~.9
A5, given A4: .99+
A6, given A5: .95+
I conclude a .8+ confidence that it’s in principle possible for humans to develop systems that are significantly more capable of delivering technological developments than humans are.
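As a quick sanity check on that figure, here is a minimal sketch (mine, not part of the argument above) that simply multiplies the stated conditional confidences straight through, treating the 1-epsilon terms as 1 and ignoring the fact that the conditions do not form a strictly nested chain:

```python
# Minimal sketch: multiply the stated conditional confidences straight through.
# Assumptions: "1 - epsilon" is rounded to 1, and the chain of conditionals is
# composed by simple multiplication (a simplification of the argument).
confidences = {
    "A1": 1.0,
    "A2": 1.0,
    "A3 given A2": 0.99,
    "A4 given A2": 0.90,
    "A5 given A4": 0.99,
    "A6 given A5": 0.95,
}

joint = 1.0
for label, p in confidences.items():
    joint *= p

print(f"joint confidence ~ {joint:.3f}")  # ~ 0.838, i.e. .8+
```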
I’ll pause there and see if we’ve diverged thus far: if you have different confidence levels for the assumptions I’ve stated, I’m interested in yours. If you don’t believe that my conclusion follows from the assumptions I’ve stated, I’m interested in why not.
You can’t really compare technological designs with superficially similar evolutionary inventions when the technology faced no selection pressure, and therefore no optimization, on the dimension being compared. For example, you would have to compare the energy efficiency with which insects or birds can carry certain amounts of weight with a similar artificial means of transport carrying the same amount of weight. Or you would have to compare the energy efficiency and maneuverability of bird and insect flight with artificial flight. But comparing a train full of hard disk drives with the bandwidth of satellite communication is not useful. Saying that a rocket can fly faster than anything that evolution came up with is not generalizable to intelligence. And even if I were to accept that argument, there are many counter-examples: the echolocation of bats, efficient photosynthesis, or human gait. And the invention of rockets did not lead to space colonization either; space exploration has actually been regressing.
You also mention that human intelligence is primarily responsible for the creation of technology. I think this is misleading. What is responsible is that we are goal-oriented while evolution is not. But the advance of scientific knowledge is largely an evolutionary process. I don’t see that intelligence is currently tangible enough for us to measure whether the return on increased intelligence is proportional to the resources it would take to amplify it. The argument from the gap between chimpanzees and humans is interesting, but it cannot be used to extrapolate onwards from human general intelligence. It is pure speculation that humans are not Turing complete and that there are levels above our own. That chimpanzees exist, and humans exist, is not proof of the existence of anything that bears, in any relevant respect, the same relationship to a human that a human bears to a chimpanzee.
It is in principle possible to create artificial intelligence that is as capable as human intelligence. But this says nothing about how quickly we will be able to come up with it. I believe that intelligence is fundamentally dependent on the complexity of the goals against which it is measured. Goals give rise to agency and define an agent’s drives. As long as we are unable to precisely hard-code a complexity of values similar to that of humans, we won’t achieve levels of general intelligence similar to humans.
It is true that humans have created a lot of tools that help them to achieve their goals. But it is not clear that incorporating those tools into some sort of self-perception, some sort of guiding agency, is superior to humans using a combination of tools and expert systems. In other words, it is not clear that there exists a class of problems that is solvable by Turing machines in general but not by a combination of humans and expert systems. And if that were the case, then I think that, just as chimpanzees would be unable to invent science, we won’t be able to come up with a meta-heuristic that would allow us to discover algorithms that can solve a class of problems that we can’t (other than by using guided evolution).
Besides, recursive self-improvement does not demand sentience, consciousness or agency. Even if humans are not able to “recursively improve” their own algorithms, we can still “recursively improve” our tools. And the supremacy of recursively self-improving agents over humans and their tools is a reasonable conjecture, but not a fact. It largely relies on the idea that integrating tools into a coherent framework of agency has huge benefits.
I also object to assigning numerical probability estimates to informal arguments and predictions. When faced with data from empirical experiments, or goats behind doors in a game show, it is reasonable. But using formalized methods to evaluate informal evidence can be very misleading. For real-world, computationally limited agents it is a recipe for failing spectacularly. Using formalized methods to evaluate vague ideas like risks from AI can lead you to dramatically over- or underestimate evidence, because it forces you to use your intuition to assign numbers to your intuitive judgement of informal arguments.
And as a disclaimer: don’t jump to the conclusion that I generally rule out the possibility that very soon someone will stumble upon a simple algorithm that can be run on a digital computer, that can be improved to self-improve, become superhuman and take over the universe. All I am saying is that the possibility isn’t as inevitable as some seem to believe. If forced, I would probably assign a 1% probability to it, but I would still feel uncomfortable about that (which isn’t to be conflated with risks from AI in general; I don’t think FOOM is required for AIs to pose a risk).
I think that Eliezer crossed the boundary of what can sensibly be said about this topic at the present time when he says that AI will likely invent molecular nanotechnology in a matter of hours or days. Jürgen Schmidhuber is the only person I could find who might agree with that. Even Shane Legg is more skeptical. And since I do not yet have the education to evaluate state-of-the-art AI research myself, I will side with the experts and say that Eliezer is likely wrong. Of course, I have no authority, but I have to make a decision. I don’t feel it would be reasonable to believe Eliezer here without reservations.
Just because the possibility of superhuman AI seems to be disjunctive on some level doesn’t mean that there are no untested assumptions underlying the claims that such an outcome is possible. Reduce the vagueness and you will discover a set of assumptions that need to be true in conjunction.
So, I’m having a lot of difficulty mapping your response to the question I asked. But if I’ve understood your response, you are arguing that technology analogous to the technology-developing functions of human intelligence might not be in principle possible, or that if developed might not be capable of significantly greater technology-developing power than human intelligence is.
In other words, that assumptions 5 and/or 6 might be false.
I agree that it’s possible. Similar things are true of the other examples you give: it’s possible that technological echolocation, or technological walking, or technological photosynthesis, either aren’t possible in principle, or can’t be significantly more powerful than their naturally evolved analogs. (Do you actually believe that to be true of those examples, incidentally?)
This seems to me highly implausible, which is why my confidence in A5 and A6 is very high. (I have similarly high confidence in our ability to develop machines more efficient than human legs at locomotion, machines more efficient at converting sunlight to useful work than plants, and machines more efficient at providing sonar-based information about their surroundings than bats.)
So, OK. We’ve identified a couple of specific, relevant assertions for which you think that my confidence is too high. Awesome! That’s progress.
So, what level of confidence do you think is justified for those assertions? I realize that you reject assigning numbers to reported confidence, so OK… do you have a preferred way of comparing levels of confidence? Or do you reject the whole enterprise of such comparisons?
Incidentally: you say a lot of other stuff here which seems entirely beside my point… I think because you’re running out ahead to arguments you think I might make some day. I will return to that stuff if I ever actually make an argument to which it’s relevant.
I am uneasy with premise 4. I think human technological progress involves an awful lot of tinkering and evolution, and intelligent action by the technologist is not the hardest part. I doubt that if we could all think twice as quickly*, we would develop technology twice as quickly. The real rate-limiting step isn’t the design, it’s building things and testing them.
This doesn’t mean that premise 4 is wrong, exactly, but it means that I’m worried it’s going to be used in an inconsistent, equivocal way.
*I am picturing taking all the relevant people, and having them think the same thoughts they do today, in half the time. Presumably they use the newly-free time to think more thoughts.
Fair enough. If I end up using it equivocally or inconsistently, please do call me out on it.
Note that absolutely nothing I’ve said so far implies people thinking the same thoughts they do today in half the time.
No no, I wasn’t attributing “same thoughts in half the time” to you. I was explaining the thought-experiment I was using to distinguish “intelligence” as an input from other requirements for technology creation.
If what you understand by “intelligence” is the ability to arrive at the same conclusions faster, then I agree with you that that thing has almost nothing to do with technological development, and I should probably back up and rewrite assumptions 4-6 while tabooing the word “intelligence”.