Do you have an overall view on the feasibility and timeline for genetic engineering of human intelligence?
For example, at what odds would you bet that we will have the ability to create hundreds of IQ +6 sigma super-geniuses by 2020 (for a reasonable cost, e.g. total project cost <$1bn)? 2030? 2040? 2050? 2075?
This is quite relevant for people interested in the singularity, because if it is highly feasible (and there are some who think it is), then it could provide a route to singularity that is independent of software AI progress, thereby forcing a rational observer to include an additional factor in favor of extreme scientific progress in the 21st century.
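For calibration, here is a quick back-of-the-envelope sketch (mine, not the questioner’s) of what “+6 sigma” means under a plain Gaussian model of IQ; real IQ tails are fatter than Gaussian, so treat the tail number as an order-of-magnitude illustration only.

```python
# How rare is +6 sigma under a simple Gaussian model of IQ?
# Illustrative only: real IQ distributions have fatter tails than a normal.
import math

sigma_threshold = 6
iq_mean, iq_sd = 100, 15

# Upper tail of a standard normal: P(Z > x) = erfc(x / sqrt(2)) / 2
tail_prob = math.erfc(sigma_threshold / math.sqrt(2)) / 2
iq_equivalent = iq_mean + sigma_threshold * iq_sd  # 190 on an SD-15 scale

print(f"IQ equivalent: {iq_equivalent}")
print(f"P(Z > 6) = {tail_prob:.1e}, roughly 1 in {1 / tail_prob:,.0f}")
```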
I would say that it is in some sense obvious that higher intelligence is possible, because the process that led to whatever intelligence we have was haphazard (path-dependent, stochastic, and all that) and because what optimization did occur was under severe constraints, some of which no longer apply. Clearly, the best possible performance under severe constraints is inferior to the best possible with fewer constraints.
So, if C-sections allow baby heads to get bigger, or if calories are freely available today, changes in brain development that take advantage of those relaxed constraints ought to be feasible. In principle this does not have to result in people who are damaged or goofy, although they would not do well in ancestral environments. In practice, since we won’t know what the hell we are doing… of course it will.
Still, that’s too close to an existence proof: it doesn’t really tell you how to do it.
You could probably get real improvements by mining existing genetic variation: look at individuals and groups with unusually high IQs, search for causal variants. Plomin and company haven’t had any real success (in terms of QTLs that explain much of the variance), but for this purpose one doesn’t care about variance explained, just effect size. A rare allele that does the job would be useful. I’d look at groups with high average IQ, but at others also.
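A minimal sketch of why variance explained and effect size come apart, assuming a simple additive biallelic locus in Hardy-Weinberg proportions (my illustration with made-up numbers, not something from the answer above): a rare allele with a large per-copy effect contributes almost nothing to population variance, yet it is exactly what you would want for engineering.

```python
# Toy additive model: variance explained vs. per-allele effect size.
# Assumes Hardy-Weinberg proportions and additivity; numbers are illustrative.

def variance_explained(p, effect_iq_points, total_iq_variance=15**2):
    """Fraction of IQ variance attributable to one biallelic additive locus."""
    locus_variance = 2 * p * (1 - p) * effect_iq_points**2  # 2pq * a^2
    return locus_variance / total_iq_variance

# A rare allele with a big per-copy effect barely registers in a variance budget...
print(variance_explained(p=0.001, effect_iq_points=10))  # ~0.0009 (under 0.1%)

# ...while a common allele with a tiny effect can "explain" as much or more.
print(variance_explained(p=0.4, effect_iq_points=0.7))   # ~0.001
```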
There are other possible approaches. If you could error-correct the genome, fix all the mutational noise, you might see higher IQ. You could dig up Gauss and clone him. My favorite idea is finding two human ethnic groups that ‘nick’ - whose F1 offspring exhibit hybrid vigor.
As for the singularity: I could, I think, make a pretty good case that scientific and technological progress is slowing down.
I think that this comment highlights the fact that SIAI has a major brand management problem: SIAI is not concerned with “acceleration” of “progress”, but with the development of smarter-than-human AI, which could occur at a point in time when technology and economic indicators show growth, stagnation, or even decline.
But those who push the “acceleration” of “progress” brand have about 10^3 times our marketing budget.
No disrespect to Gregory—it is simply the case that the marketing and info that’s out there has turned the “Singularity” brand sour—the term has lost any precise meaning.
If the problem is Kurzweil’s message, then it probably doesn’t help SIAI’s brand that he’s listed second.
Anecdotally, I’d say you’re absolutely right and that SIAI’s prospects could be substantially improved by jettisoning the term “singularity”. I’m someone who SIAI should want to target as a supporter, and I’ve mostly come around, but the term “singularity” just radiates bad juju for me. I think I’m going to apply for a visiting fellow spot, but frankly, I’m not especially comfortable telling friends and family that I’m planning to work at a place called the Singularity Institute for Artificial Intelligence and not get paid for it (I’m hoping they don’t have the same reaction to the word that I did). I suspect I would have been more supportive earlier if SIAI had been called something else.
I concur. Whenever I describe what I would be doing if I volunteered for SIAI, I avoid mentioning its name entirely and just say that they deal in “robotics” (which I tend to use instead of AI) at the “theoretical level”, that they want to bring it to the “level of human intelligence”, and that they study “risks to humanity”.
Of course, this is all “counting chickens ’fore they’re hatched” at this point, because I haven’t sent my email/CV to Anna Salamon yet...
Ah, go on Silas. I’m especially sure Alicorn will be delighted to meet you at the SIAI Benton house ;-)
But current predictions of what happens when smarter-than-human AI is made rely somewhat on there being a positive relation between brain/processing power and technological innovation.
The brain power and processing power of humanity are ever increasing: more people, more educated humans, and more computing power. We can crunch ever bigger data sets, and the science we are trying to do requires those bigger data sets as well (the LHC, genomic analysis, weather prediction). Perhaps we have nearly exhausted the simple science and are left with the increasingly complex; an AI trying to self-improve would face similar problems. The question is whether its rate of self-improvement would be greater than or less than the rate at which the problems it has to solve in order to self-improve get harder.
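To make that race explicit, here is a toy simulation (entirely my own, with made-up parameters, not a forecast): capability gain per step is proportional to current capability divided by the difficulty of the next improvement, and difficulty itself grows as a power of capability. Which exponent wins determines whether you get compounding growth, steady progress, or a crawl.

```python
# Toy race between self-improvement and rising problem difficulty.
# All parameters are made up; this only illustrates the shape of the argument.

def simulate(difficulty_exponent, steps=50, rate=0.5):
    """Gain per step = rate * capability / difficulty,
    with difficulty = capability ** difficulty_exponent."""
    capability = 1.0
    for _ in range(steps):
        difficulty = capability ** difficulty_exponent
        capability += rate * capability / difficulty
    return capability

print(simulate(0.0))  # fixed difficulty: compounding, exponential growth
print(simulate(0.5))  # difficulty lags behind: accelerating (roughly quadratic) gains
print(simulate(1.0))  # difficulty keeps exact pace: steady, linear gains
print(simulate(2.0))  # difficulty outruns capability: gains shrink toward a crawl
```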
Thanks for the response.
(Consider the following question in a Bayesian spirit, i.e. the spirit of giving a probability to any event, even if you don’t have an associated frequency for it)
If you had to bet on whether the technology for these genetic engineering efforts (NOT the political will) will be ready by e.g.
2030, 2040, 2050, 2075, 2125,
what kind of odds/probabilities would you bet at?
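For readers not used to phrasing beliefs as bets, the conversion between a probability and fair betting odds is mechanical; a small sketch, using hypothetical numbers:

```python
# Converting between a probability and fair "n-to-1 against" betting odds.

def fair_odds_against(p):
    """E.g. p = 0.25 -> 3.0, meaning fair odds of 3-to-1 against."""
    return (1 - p) / p

def prob_from_odds_against(odds):
    """Inverse: 9-to-1 against corresponds to p = 0.1."""
    return 1 / (1 + odds)

print(fair_odds_against(0.25))    # 3.0
print(prob_from_odds_against(9))  # 0.1
```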
I have heard of the theory that the human with the “consensus” genome would be way above average in phenotype.
Any idea how much?
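A hedged toy calculation of that “consensus genome” intuition (my own sketch, not something the answerer endorses): suppose each person carries a Poisson-distributed number of small, roughly equal-effect deleterious variants, and those variants account for some share of the trait’s variance. Then a genome carrying none of them sits about sqrt(mean count × variance share) standard deviations above average. Both inputs below are placeholders, and the answer is very sensitive to them.

```python
# Toy "consensus genome" advantage under a Poisson mutational-load model.
# lambda_load: mean number of small-effect deleterious variants per person.
# load_variance_share: fraction of trait variance those variants account for.
# Both numbers are placeholders, not measurements.
import math

def consensus_advantage_sd(lambda_load, load_variance_share):
    """Trait SDs above the population mean for a genome with zero such variants."""
    return math.sqrt(lambda_load * load_variance_share)

print(consensus_advantage_sd(lambda_load=500, load_variance_share=0.2))  # 10.0 SD
print(consensus_advantage_sd(lambda_load=100, load_variance_share=0.1))  # ~3.2 SD
```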
I have heard discussion about the singularity on the web but I have never had any idea at all what it is, so I can’t say much about that.
I do not think there is much prospect for dramatic IQ elevation without producing somewhat damaged people. We talk a lot in our book about the ever-present deleterious consequences of the strong selection that follows any environmental change. Have a look, for example, at the whippet homozygous for a dinged version of myostatin. Even a magic pill is likely to do the same thing. OTOH, scientists don’t have a very good track record at predicting the future. Now, I am going to hop into my flying car and go to the office -:)
HCH
You could contact Anna Salamon or Carl Shulman for a well-written introductory piece on the singularity.
Very short summary: if we humans manage to scientifically understand intelligence, then the consequences would be counter-intuitively extreme. The counter-intuitiveness comes from the fact that humans struggle to see our own intelligence in perspective:
both how extreme and sudden its effects have been on the biosphere,
and the fact that it is not the best possible form of intelligence, not the final word, more like a messy first attempt.
If one accepts that intelligence is a naturalistic property of computational systems, then it becomes clear that the range of possible kinds or levels of intelligence probably extends both to much narrower and dumber systems than humans and to much more able, general systems.
Interesting. Would these people be so damaged that they would be unable to do science? Or would you be expecting super-aspergers types? (Or, to put it more rigorously, what probability would you assign to dead/severely disabled vs. super-aspergers/some other non-showstopping deleterious effect?)
I don’t know, but I can give you some candidates. One is torsion spasm (Idiopathic Torsion Dystonia). It will give you about a ten-point IQ boost just by itself. Most of the time the only effect of the disease is vulnerability to writer’s cramp, but 10% of the time it puts you in a wheelchair. So you could do science just fine.
Similarly, the Ashkenazi form of Gaucher’s disease is not ordinarily all that serious, but it also gives a hefty IQ boost. Asperger-like stuff would probably also increase: many super-bright people seem to be a bit not quite. Of course, lots of other super-brights seem to be completely normal.
I am just babbling, I have no special insight at all...
HCH
That is very interesting, thanks.
The only question that remains in my mind is what the timescale for this is: both the “when will it become technically feasible” and “when will political and economic factors actually cause it to happen”.