Questionable. Is smarter-than-human intelligence possible, in a sense comparable to the difference between chimps and humans? To my awareness, we have no evidence to this end.
What would you accept as evidence?
Would you accept sophisticated machine-learning algorithms like the ones in the Netflix contest, which find connections that make no sense to humans, who simply can’t work with high-dimensional data?
Would you accept a circuit designed by a genetic algorithm, which doesn’t work in the physics simulation but works better in reality than anything humans have designed, with mysterious parts that are not connected to anything but are necessary for it to function?
Would you accept a chess program which could crush any human chess player who ever lived? Kasparov peaked at ELO 2851; Rybka stands at 3265. Wikipedia says grandmaster status comes at ELO 2500. So Rybka is now even further beyond Kasparov at his peak than Kasparov was beyond a new grandmaster. And it’s not like Rybka or the other chess AIs will weaken with age.
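To make the comparison explicit, here is the arithmetic on the ratings just quoted (plain subtraction; nothing is assumed beyond the numbers above):

```python
kasparov_peak, rybka, new_gm = 2851, 3265, 2500
print(rybka - kasparov_peak)   # 414-point gap: Rybka over peak Kasparov
print(kasparov_peak - new_gm)  # 351-point gap: peak Kasparov over a new GM
# 414 > 351, so the Rybka-Kasparov gap really is the larger one.
```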
Or are you going to pull a no-true-Scotsman and assert that each one of these is mechanical or unoriginal or not really beyond human or just not different enough?
I think it at least possible that much-smarter-than-human intelligence might turn out to be impossible. There exist some problem domains where there appear to be a large number of solutions, but where the quality of the solutions saturates quickly as more and more resources are thrown at them. A toy example is how often records are broken in a continuous 1-D domain, with attempts drawn from a constant probability distribution: the number of records broken goes as the log of the number of attempts (see the sketch below). If some of the tasks an AGI must solve are like this, then it might not do much better than humans, not because evolution did a wonderful job of optimizing humans for perfect intelligence, but because that part of the problem domain is a brick wall, and anything must bash into it at nearly the same point.
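A minimal simulation of that toy example (my illustration, assuming a uniform distribution; any fixed continuous distribution behaves the same way). Among n i.i.d. draws, the expected number of running records is the harmonic number H_n ≈ ln(n), so each new record costs exponentially more attempts:

```python
import math
import random

def count_records(n: int) -> int:
    """Count how many draws set a new running maximum among n i.i.d. samples."""
    best = float("-inf")
    records = 0
    for _ in range(n):
        x = random.random()  # draws from a fixed distribution
        if x > best:
            best, records = x, records + 1
    return records

# Mean records over 200 trials vs. the ln(n) + Euler-gamma prediction.
for n in (10, 1_000, 100_000):
    mean = sum(count_records(n) for _ in range(200)) / 200
    print(n, round(mean, 2), round(math.log(n) + 0.5772, 2))
```

Multiplying the number of attempts by 100 buys only a constant number of additional records, which is the saturation being described.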
One (admittedly weak) piece of evidence, a real example of saturation, is an optimizing compiler being used to recompile itself. It is a recursive optimizing system, and, if there is a knob to allow more effort to be spent on optimization, the speed-up from the first pass can be used to allow a bit more effort to be applied to a second pass for the same CPU time. Nonetheless, the results for this specific recursion are not FOOM.
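A toy model of why such a recursion stalls (my sketch, not the commenter’s actual setup), assuming the speed-up from extra optimization effort saturates:

```python
def speedup(effort: float) -> float:
    """Toy diminishing-returns curve: never exceeds 2x, whatever the effort."""
    return 2.0 - 1.0 / (1.0 + effort)

# Each generation recompiles the compiler; the speed-up buys proportionally
# more optimization effort on the next pass for the same CPU time.
effort = 1.0
for generation in range(12):
    effort = speedup(effort)
    print(f"gen {generation}: speedup {effort:.4f}")

# The sequence climbs 1.5, 1.6, 1.6154, ... and converges to the fixed point
# of e = 2 - 1/(1 + e), about 1.618: steady improvement, but no FOOM.
```

Any saturating effort-to-speed-up curve gives the same qualitative picture; only a curve with increasing returns would make the recursion diverge.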
The evidence in the other direction is basically existence proofs from the most intelligent people or groups of people that we know of. Something as intelligent as Einstein must be possible, since Einstein existed. Given an AI Einstein working on improving its own intelligence, it isn’t clear whether it could make a little progress or a great deal.
but because that part of the problem domain is a brick wall, and anything must bash into it at nearly the same point.
This goes for your compilers as well, doesn’t it? There are still major speed-ups available in compilation technology (the closely connected areas of whole-program compilation+partial evaluation+supercompilation), but a compiler is still expected to produce isomorphic code, and that puts hard information-theoretic bounds on output.
Would you accept a circuit designed by a genetic algorithm, which doesn’t work in the physics simulation but works better in reality than anything humans have designed, with mysterious parts that are not connected to anything but are necessary for it to function?
Can you provide details / link on this?

I should’ve known someone would ask for the cite rather than just do a little googling. Oh well. Turns out it wasn’t a radio, but a voice-recognition circuit. From http://www.talkorigins.org/faqs/genalg/genalg.html#examples :

“This aim was achieved within 3000 generations, but the success was even greater than had been anticipated. The evolved system uses far fewer cells than anything a human engineer could have designed, and it does not even need the most critical component of human-built systems—a clock. How does it work? Thompson has no idea, though he has traced the input signal through a complex arrangement of feedback loops within the evolved circuit. In fact, out of the 37 logic gates the final product uses, five of them are not even connected to the rest of the circuit in any way—yet if their power supply is removed, the circuit stops working. It seems that evolution has exploited some subtle electromagnetic effect of these cells to come up with its solution, yet the exact workings of the complex and intricate evolved structure remain a mystery (Davidson 1997).”
The analogy is that AGI can be to us as we are to chimps. This is the part that needs the focus.
We could have said in the 1950s that machines beat us at arithmetic by orders of magnitude. Classical AI researchers clearly were deluded by success at easy problems. The problem with winning on easy problems is that it says little about hard ones.
What I see is that in the domain of problems for which human-level performance is difficult to replicate, computers are capable of catching us and likely beating us, but gaining a great distance on us in performance is difficult. After all, a human can still beat the best chess programs with a mere pawn handicap. This may never get to two pawns, ever. Certainly the second pawn is massively harder than the first; it’s the nature of the problem space. In terms of runaway AGI control of the planet, we have to wonder if humans will always have the equivalent of a pawn handicap via other means (mostly as a result of having their hands on the reins of the economic, political, and legal structures).
BTW, is ELO supposed to have that kind of linear interpretation?
The analogy is that AGI can be to us as we are to chimps. This is the part that needs the focus.
Yes, this is the important part. Chimps lag behind humans in two distinct ways: they differ in degree, and in kind. Chimps can do a lot of human-things, but very minimally. Painting comes to mind: they do a little, but not a lot. (Degree.) Language is another well-studied subject. IIRC, they can memorize some symbols and use them, but not in the recursive way that modern linguistics (pace Chomsky) seems to regard as key. (Kind.)
What can we do with this distinction? How does it apply to my three examples?
After all, a human can still beat the best chess programs with a mere pawn handicap.
O RLY? Ever is a long time. Would you like to make this a concrete prediction I could put on PredictionBook, perhaps something along the lines of ‘no FIDE grandmaster will lose a two-pawn-odds chess match to a computer by 2050’?
BTW, is ELO supposed to have that kind of linear interpretation?
I’m not an expert on ELO by any means (do we know any LW chess experts?), but reading through http://en.wikipedia.org/wiki/Elo_rating_system#Mathematical_details doesn’t show me any warning signs—ELO point differences are supposed to reflect probabilistic differences in winning, or a ratio, and so the absolute values shouldn’t matter. I think.
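For concreteness, a minimal sketch of the standard Elo expectation formula from that Wikipedia page (the textbook logistic model; nothing here is specific to Rybka or these players):

```python
def expected_score(d: float) -> float:
    """Expected score for the higher-rated player at rating difference d."""
    return 1.0 / (1.0 + 10.0 ** (-d / 400.0))

print(round(expected_score(351), 2))  # 2851 vs. 2500: ~0.88 expected score
print(round(expected_score(414), 2))  # 3265 vs. 2851: ~0.92 expected score
```

Only the difference d enters the formula, so the absolute values indeed shouldn’t matter; note that the mapping from point gaps to expected score is logistic rather than linear.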
we have to wonder if humans will always have the equivalent of a pawn handicap via other means (mostly as a result of having their hands on the reins of the economic, political, and legal structures).
This is a possibility (made more plausible if we’re talking about those reins being used to incentivize early AIs to design more reliable and transparent safety mechanisms for more powerful successive AI generations), but it’s greatly complicated by international competition: to the extent that careful limitation and restriction of AI capabilities and access to potential sources of power reduces economic, scientific, and military productivity it will be tough to coordinate. Not to mention that existing economic, political, and legal structures are not very reliably stable: electorates and governing incumbents often find themselves unable to retain power.
BTW, is ELO supposed to have that kind of linear interpretation?
It seems that whether or not it’s supposed to, in practice it does. From the just-released “Intrinsic Chess Ratings”, which takes Rybka and does exhaustive evaluations (deep enough to be ‘relatively omniscient’) of many thousands of modern chess games, on page 9:
We conclude that there is a smooth relationship between the actual players’ Elo ratings and the intrinsic quality of the move choices as measured by the chess program and the agent fitting. Moreover, the final s-fit values obtained are nearly the same for the corresponding entries of all three time periods. Since a lower s indicates higher skill, we conclude that there has been little or no ‘inflation’ in ratings over time—if anything there has been deflation. This runs counter to conventional wisdom, but is predicted by population models on which rating systems have been based [Gli99].
The results also support a no answer to question 2 [“Were the top players of earlier times as strong as the top players of today?”]. In the 1970’s there were only two players with ratings over 2700, namely Bobby Fischer and Anatoly Karpov, and there were years as late as 1981 when no one had a rating over 2700 (see [Wee00]). In the past decade there have usually been thirty or more players with such ratings. Thus lack of inflation implies that those players are better than all but Fischer and Karpov were. Extrapolated backwards, this would be consistent with the findings of [DHMG07], which however (like some recent competitions to improve on the Elo system) are based only on the results of games, not on intrinsic decision-making.
You are getting much closer than any of the commenters before you to providing some other form of evidence to substantiate one of the primary claims here.
You have to list the primary propositions on which you base further argumentation, from which you draw conclusions, and which you use to come up with probability estimates of the risks associated with the former premises. You have to list these main principles so that anyone who comes across claims of existential risks and a plea for donations can get an overview. Then you have to provide the references you listed above, if you believe they lend credence to the ideas, so that people see that what you say isn’t made up but is based on previous work and evidence from people who are not associated with your organisation.
Or are you going to pull a no-true-Scotsman and assert that each one of these is mechanical or unoriginal or not really beyond human or just not different enough?
No. Although I have heard about all of these achievements, I’m not yet able to judge whether they provide evidence supporting the possibility of strong superhuman AI, the kind that would pose an existential risk. In the case of chess, though, I’m pretty much of the opinion that it is no strong evidence, as it is not sufficiently close to being able to overpower humans to the extent of posing an existential risk when extrapolated into other areas.
It would be good if you could provide links to the mentioned examples, especially the genetic algorithm (ETA: Here.). It is still questionable, however, whether this could lead to the stated recursive improvements or will shortly hit a limit. To my knowledge, genetic algorithms are merely used for optimization, are based on previous design spaces, and are not able to come up with something unique to the extent of leaving their design space.
Whether sophisticated machine-learning algorithms are able to discover valuable insights beyond statistical inferences within higher-dimensional data-sets is a very interesting idea, though. As I just read, the 2009 prize of the Netflix contest was given to a team that achieved a 10.05% improvement over the previous algorithm. I’ll have to examine this further to see whether it might bear evidence that this kind of complicated mesh of algorithms could lead to quick self-improvement.
One of the best comments so far, thanks. Although your last sentence, to my understanding, simply shows that you are reluctant to offer further critique.
I am reluctant because you seem to ask for magical programs when you write things like:
“To my knowledge, genetic algorithms are merely used for optimization, are based on previous design spaces, and are not able to come up with something unique to the extent of leaving their design space.”
I was going to link to AIXI and approximations thereof; full AIXI is as general as an intelligence can be if you accept that there are no uncomputable phenomena, and the approximations are already pretty powerful (from nothing to playing Pac-Man).
But then it occurred to me that anyone invoking a phrase like ‘leaving their design space’ might then just say ‘oh, those designs and models can only model Turing machines, and so they’re stuck in their design space’.
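For reference, the generality claim about full AIXI is usually cashed out with Hutter’s expectimax formula, sketched here from memory (treat the notation as approximate and see Hutter’s Universal Artificial Intelligence for the exact statement); U is a universal Turing machine and ℓ(q) the length of program q:

```latex
% AIXI's action choice at cycle k with horizon m (after Hutter):
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
    \left[ r_k + \cdots + r_m \right]
    \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Every environment that some program q can simulate gets nonzero weight 2^{-ℓ(q)}, which is why ‘stuck in its design space’ is a hard objection to make stick against it.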
But then it occurred to me that anyone invoking a phrase like ‘leaving their design space’...
I’ve no idea (formally) of what a ‘design space’ actually is. This is a tactic I frequently use against strongholds of argumentation that are seemingly based on expertise: I use their own terminology and rearrange it into something that sounds superficially clever. I like to call it a Chinese-room approach. Sometimes it turns out that all they were doing was sounding smart, and they cannot explain themselves when confronted with their own terminology, deployed to inquire about their pretences.
I thank you, however, for taking the time to link to third-party information that substantiates the given arguments for anyone who doesn’t trust the whole of LW without it.
I see. Does that actually work for you? (Note that your answer will determine whether I mentally re-categorize you from ‘interested open-minded outsider’ to ‘troll’.)
It works against cults and religion in general. I don’t argue with them about their religion being not even wrong; rather, I accept their terms and highlight inconsistencies within their own framework, going as far as I can with one of their arguments and inquiring about certain aspects in their own terminology until they are unable to answer consistently or explain where I am wrong.
This also works with the anti-GM-food bunch, data-protection activists, hippies, and many other fringe groups. For example, take the data-protection bunch concerned with information disclosure on social networks or Google Street View. Yes, I say, that’s bad, burglars could use such services to check out your house! I wonder what evidence there is for an increase in burglary in the countries where Street View has already been available for many years?
Or I tell the anti-gun lobbyists how I support their cause. It’s really bad if anyone can buy a gun. Can you point me to the strong correlation between gun ownership and firearm homicides? Thanks.