Take P != NP, for instance: the attempted proof that’s been making the rounds on various blogs. If you’ve skimmed any of the discussion, you can see that even this attempted proof piggybacks on “vast amounts of ‘ordinary’ intellectual labor, …”
By no means do I want to downplay the difficulty of P vs NP; all the same, I think we have different meanings of “vast” in mind.
The way I think about it is: think of all the intermediate levels of technological development that exist between what we have now and outright Singularity. I would only be half-joking if I said that we ought to have flying cars before we have AGI. There are of course more important examples of technologies that seem easier than AGI, but which themselves seem decades away. Repair of spinal cord injuries; artificial vision; useful quantum computers (or an understanding of their impossibility); cures for the numerous cancers; revival of cryonics patients; weather control. (Some of these, such as vision, are arguably sub-problems of AGI: problems that would have to be solved in the course of solving AGI.)
Actually, think of math problems if you like. Surely there are conjectures in existence now—probably some of them already famous—that will take mathematicians more than a century to prove (assuming no Singularity or intelligence enhancement before then). Is AGI significantly easier than the hardest math problems around now? This isn’t my impression—indeed, it looks to me more analogous to problems that are considered “hopeless”, like the “problem” of classifying all groups, say.
I hate to go all existence proofy on you, but we have an existence proof of a general intelligence—accidentally sneezed out by natural selection, no less, which has severe trouble building freely rotating wheels—and no existence proof of a proof of P != NP. I don’t know much about the field, but from what I’ve heard, I wouldn’t be too surprised if proving P != NP is harder than building FAI for the unaided human mind. I wonder if Scott Aaronson would agree with me on that, even though neither of us understands the other’s field? (I just wrote him an email and asked, actually; and this time remembered not to say my opinion before asking for his.)
Scott says that he thinks P != NP is easier / likely to come first.
Here’s an interview with Scott Aaronson:
After glancing over a 100-page proof that claimed to solve the biggest problem in computer science, Scott Aaronson bet his house that it was wrong. Why?
It’s interesting that you both seem to think that your problem is easier; I wonder if there’s a general pattern there.
What I find interesting is that the pattern nearly always goes the other way: you’re more likely to think that a celebrated problem you understand well is harder than one you don’t know much about. It says a lot about both Eliezer’s and Scott’s rationality that they think of the other guy’s hard problems as even harder than their own.
Obviously not. That would be a proof of P != NP.
As for existence proof of a general intelligence, that doesn’t prove anything about how difficult it is, for anthropic reasons. For all we know 10^20 evolutions each in 10^50 universes that would in principle allow intelligent life might on average result in 1 general intelligence actually evolving.
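To make the scale of that hypothetical concrete, here is a quick back-of-the-envelope calculation (a sketch only; the quantities are the ones stipulated above, not estimates of real frequencies):

```python
# Back-of-the-envelope for the hypothetical above. The quantities are the
# stipulated ones (10^20 evolutions in each of 10^50 universes, one success
# on average); they are illustrative, not estimates.
evolutions_per_universe = 10**20
universes = 10**50
expected_successes = 1

# Implied chance that any single evolutionary history yields general intelligence:
p = expected_successes / (evolutions_per_universe * universes)
print(f"{p:.0e}")  # prints 1e-70
```

On those numbers, our existence is compatible with a per-evolution success chance of 10^-70, which is why the existence proof alone puts no useful upper bound on the difficulty.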
Of course, if you buy the self-indication assumption (which I do not) or various other related principles you’ll get an update that compels belief in quite frequent life (constrained by the Fermi paradox and a few other things).
More relevantly, approaches like Robin’s Hard Step analysis and convergent evolution (e.g. octopus/bird intelligence) can rule out substantial portions of “crazy-hard evolution of intelligence” hypothesis-space. And we know that human intelligence isn’t so unstable as to see it being regularly lost in isolated populations, as we might expect given ludicrous anthropic selection effects.
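For readers who haven’t seen it, here is a minimal Monte Carlo sketch of the hard-step effect (my own toy illustration, not Robin’s model itself): if several steps each have expected waiting times much longer than the available window, then conditional on all of them finishing inside the window, the observed step durations cluster around window/(k+1) no matter how hard the steps actually were. That is what lets timing evidence cut down the “crazy-hard” hypothesis-space.

```python
import numpy as np

# Toy hard-steps simulation (illustrative assumptions throughout).
rng = np.random.default_rng(0)
window = 1.0      # available time, arbitrary units (e.g. a planet's habitable span)
k = 3             # number of hard steps
mean_step = 5.0   # expected time per step, >> window, i.e. each step is "hard"

samples = rng.exponential(mean_step, size=(1_000_000, k))
ok = samples.sum(axis=1) < window           # histories where every step fit
print("acceptance rate:", ok.mean())        # rare, as hard steps should be
print("mean step durations given success:", samples[ok].mean(axis=0))
# Each conditional mean is close to window / (k + 1) = 0.25; making the steps
# harder (larger mean_step) changes the acceptance rate but not this shape.
```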
I looked at Nick’s:
http://www.anthropic-principle.com/preprints/olum/sia.pdf
I don’t get it. Anyone know what is supposed to be wrong with the SIA?
We can make better guesses than that: evolution coughed up quite a few things that would be considered pretty damn intelligent for a computer program, like ravens, octopuses, rats or dolphins.
Not independently (not even cephalopods, at least not completely). And we have no way of estimating the difference in difficulty between that level of intelligence and general intelligence other than evolutionary history (which for anthropic reasons could be highly untypical) and similarity in makeup; but we already know that our type of nervous system is capable of supporting general intelligence, whereas most rat-level intelligences might hit fundamental architectural problems first.
We can always estimate, even with very little knowledge—we’ll just have huge error margins. I agree it is possible that “For all we know 10^20 evolutions each in 10^50 universes that would in principle allow intelligent life might on average result in 1 general intelligence actually evolving”; I would just bet on a much higher probability than that, though I agree with the principle.
The evidence that pretty smart animals exist in distant branches of the tree of life and in different environments is weak evidence that intelligence is “pretty accessible” in evolution’s search space. It’s stronger evidence than the mere fact that we, intelligent beings, exist.
Intelligence, sure. The original point was that our existence doesn’t put a meaningful upper bound on the difficulty of general intelligence. Cephalopods are good evidence that, given whatever rudimentary precursors of a nervous system our common ancestor had (I know it had differentiated cells, but I’m not sure what else; I think it didn’t really have organs like higher animals, let alone anything that really qualified as a nervous system), cephalopod-level intelligence is comparatively easy, having evolved independently two times. It doesn’t say anything about how much more difficult general intelligence is compared to cephalopod intelligence, nor about whether whatever precursors to a nervous system our common ancestor had were unusually conducive to intelligence compared to the average of similarly complex evolved beings.
If I had to guess I would assume cephalopod level intelligence within our galaxy and a number of general intelligences somewhere outside our past light cone. But that’s because I already think of general intelligence as not fantastically difficult independently of the relevance of the existence proof.
This page on the history of invertebrates suggests that our common ancestors had bilateral symmetry, triploblasty, and Hox genes.
Hox genes suggest that they both had a modular body plan of some sort. Triploblasty implies some complexity (the least complex triploblastic organism today is a flatworm).
I’d be very surprised if the most recent common ancestor didn’t have neurons similar to most neurons today, as I’ve had a hard time finding out the differences between the two. A basic introduction to nervous systems suggests they are very similar.
Well, I for one strongly hope that we resolve whether P = NP before we have AI, since a large part of my estimate for the probability of AI being able to go FOOM is based on how much of the complexity hierarchy collapses. If there’s heavy collapse, AI going FOOM is much more plausible.
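One toy way to see the connection (my framing; only a sketch): much design work, including the search for better algorithms that recursive self-improvement would require, has the shape “find a configuration that passes a cheap test” over an exponentially large space. Verifying a candidate is easy; finding one is, as far as we know, not. That is exactly the shape of problem a heavy collapse would make cheap:

```python
from itertools import product

# Toy "design search": find bit-configurations passing a black-box test.
# Checking one candidate is cheap (the NP part); today, finding one in
# general takes time exponential in n. Under a heavy collapse with usable
# constants, searches shaped like this would become tractable at scale.

def passes_test(bits):
    # Stand-in for "the design works"; an arbitrary hidden criterion.
    return sum(bits) == 5 and bits[0] == 1

n = 8
solutions = [c for c in product((0, 1), repeat=n) if passes_test(c)]
print(f"{len(solutions)} solutions found among {2**n} candidates checked")
```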
I don’t know much about the field, but from what I’ve heard, I wouldn’t be too surprised if proving P != NP is harder than building FAI for the unaided human mind.
Well actually, after thinking about it, I’m not sure I would either. There is something special about P vs NP, from what I understand, and I didn’t even mean to imply otherwise above; I was only disputing the idea that “vast amounts” of work had already gone into the problem, for my definition of “vast”.
Scott Aaronson’s view on this doesn’t move my opinion much (despite his large contribution to my beliefs about P vs NP), since I think he overestimates the difficulty of AGI (see your Bloggingheads diavlog with him).
I don’t know much about the field, but from what I’ve heard, I wouldn’t be too surprised if proving P != NP is harder than building FAI for the unaided human mind.
Awesome! Be sure to let us know what he thinks. Sounds unbelievable to me though, but what do I know.
Why is AGI a math problem? What is abstract about it?
We don’t need math proofs to know if AGI is possible. It is; the brain is living proof.
We don’t need math proofs to know how to build AGI—we can reverse engineer the brain.
There may be a few clues in there—but engineers are likely to get to the goal looong before the emulators arrive—and engineers are math-friendly.
A ‘few clues’ sounds like a gross underestimation. It is the only working example, so it certainly contains all the clues, not just a few. The question of course is how much of a shortcut is possible. The answer to date seems to be: none to slim.
I agree engineers reverse engineering will succeed way ahead of full emulation, that wasn’t my point.
If information is not extracted and used, it doesn’t qualify as being a “clue”.
The question of course is how much of a shortcut is possible.
The answer to date seems to be: none to slim.
The search oracles and stock-market bot makers have paid precious little attention to the brain. They are based on engineering principles instead.
I agree engineers reverse engineering will succeed way ahead of full emulation,
Most engineers spend very little time on reverse-engineering nature. There is a little “bioinspiration”—but inspiration is a bit different from wholesale copying.
This is a good part of the guts of it. That bit of it is a math problem:
http://timtyler.org/sequence_prediction/
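For concreteness, here is a minimal instance of the kind of problem that page treats as central (a toy of my own, not Tyler’s formulation): assign probabilities to the next symbol of a stream given its history, and improve with data.

```python
from collections import defaultdict

# Minimal order-1 Markov sequence predictor with Laplace smoothing:
# a toy instance of the "sequence prediction" framing of intelligence.

class MarkovPredictor:
    def __init__(self, alphabet):
        self.alphabet = alphabet
        self.counts = defaultdict(lambda: {a: 0 for a in alphabet})
        self.prev = None

    def predict(self):
        """Return P(next symbol | previous symbol), Laplace-smoothed."""
        seen = self.counts[self.prev]
        total = sum(seen.values()) + len(self.alphabet)
        return {a: (seen[a] + 1) / total for a in self.alphabet}

    def observe(self, symbol):
        if self.prev is not None:
            self.counts[self.prev][symbol] += 1
        self.prev = symbol

p = MarkovPredictor("01")
for s in "0101010101":
    print(round(p.predict()[s], 3), end=" ")  # probability given to the true next bit
    p.observe(s)
# Output rises toward 1 as the alternating pattern is learned.
```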